Test Report: KVM_Linux_crio 18171

                    
99de8c2f99c92d56089a7f0e4f6f6a405ebd3f59:2024-02-13:33127

Failed tests (24/310)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 153.68
46 TestAddons/parallel/CloudSpanner 11.83
53 TestAddons/StoppedEnableDisable 154.06
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.88
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 170.93
224 TestMultiNode/serial/RestartKeepsNodes 690.82
226 TestMultiNode/serial/StopMultiNode 142.11
233 TestPreload 281.05
292 TestStartStop/group/old-k8s-version/serial/Stop 139.39
296 TestStartStop/group/no-preload/serial/Stop 138.87
299 TestStartStop/group/embed-certs/serial/Stop 138.81
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.89
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.57
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.67
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.53
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.47
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 511.84
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 160.19
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 74.87
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 169.27
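Each failure below can be re-run in isolation against the same driver and container runtime used by this job. The sketch below is illustrative only: it assumes minikube's integration suite is invoked through the standard go test entry point with a locally built out/minikube-linux-amd64, and the harness flag names (notably --minikube-start-args) are an assumption that may not match this environment exactly.

    # Illustrative re-run of a single failing test with this job's kvm2 driver and crio runtime.
    # --minikube-start-args is assumed to be a flag of the integration test binary.
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m \
      -args --minikube-start-args="--driver=kvm2 --container-runtime=crio"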
TestAddons/parallel/Ingress (153.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-548360 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-548360 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-548360 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cdef0026-04e2-4f2d-a0be-076dce5a611b] Pending
helpers_test.go:344: "nginx" [cdef0026-04e2-4f2d-a0be-076dce5a611b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cdef0026-04e2-4f2d-a0be-076dce5a611b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006261209s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-548360 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.891155546s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-548360 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.217
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable ingress-dns --alsologtostderr -v=1: (1.600824329s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable ingress --alsologtostderr -v=1: (8.016965235s)
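The step that fails above is the in-VM curl against the ingress: the remote command exited with status 28, which is curl's exit code for a timed-out transfer, so nginx behind the ingress never answered within the allotted window. The check can be repeated by hand with the same command the test issues, taken verbatim from the log above:

    out/minikube-linux-amd64 -p addons-548360 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"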
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-548360 -n addons-548360
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 logs -n 25: (1.414455099s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-452583                                                                     | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-236740                                                                     | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-142558                                                                     | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-452583                                                                     | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-720567 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | binary-mirror-720567                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46241                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-720567                                                                     | binary-mirror-720567 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-548360                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-548360                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-548360 --wait=true                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | -p addons-548360                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | addons-548360                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC |                     |
	|         | addons-548360                                                                               |                      |         |         |                     |                     |
	| ip      | addons-548360 ip                                                                            | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	| addons  | addons-548360 addons disable                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | -p addons-548360                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-548360 addons disable                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-548360 ssh cat                                                                       | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | /opt/local-path-provisioner/pvc-94c1659d-c197-459f-ae81-0c70edc6f082_default_test-pvc/file1 |                      |         |         |                     |                     |
	| ssh     | addons-548360 ssh curl -s                                                                   | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-548360 addons disable                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 22:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-548360 addons                                                                        | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-548360 addons                                                                        | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 22:00 UTC | 13 Feb 24 22:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-548360 addons                                                                        | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 22:01 UTC | 13 Feb 24 22:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-548360 ip                                                                            | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 22:01 UTC | 13 Feb 24 22:01 UTC |
	| addons  | addons-548360 addons disable                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 22:01 UTC | 13 Feb 24 22:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-548360 addons disable                                                                | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 22:02 UTC | 13 Feb 24 22:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:45.651714   16934 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:45.652000   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:45.652010   16934 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:45.652015   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:45.652194   16934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 21:56:45.652814   16934 out.go:298] Setting JSON to false
	I0213 21:56:45.653621   16934 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2357,"bootTime":1707859049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:45.653676   16934 start.go:138] virtualization: kvm guest
	I0213 21:56:45.655858   16934 out.go:177] * [addons-548360] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:45.657155   16934 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 21:56:45.657167   16934 notify.go:220] Checking for updates...
	I0213 21:56:45.658359   16934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:45.659536   16934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:56:45.660744   16934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:45.661864   16934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 21:56:45.662878   16934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 21:56:45.664140   16934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:45.695796   16934 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 21:56:45.697039   16934 start.go:298] selected driver: kvm2
	I0213 21:56:45.697051   16934 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:45.697063   16934 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 21:56:45.697768   16934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:45.697852   16934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:45.713374   16934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:45.713430   16934 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:45.713684   16934 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 21:56:45.713770   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:56:45.713792   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:56:45.713807   16934 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:45.713818   16934 start_flags.go:321] config:
	{Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:45.714020   16934 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:45.716658   16934 out.go:177] * Starting control plane node addons-548360 in cluster addons-548360
	I0213 21:56:45.717848   16934 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 21:56:45.717907   16934 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 21:56:45.717921   16934 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:45.717992   16934 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 21:56:45.718002   16934 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 21:56:45.718318   16934 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json ...
	I0213 21:56:45.718339   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json: {Name:mk96aacdba824faa4fb9e974154f4737e39c2ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:56:45.718467   16934 start.go:365] acquiring machines lock for addons-548360: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 21:56:45.718512   16934 start.go:369] acquired machines lock for "addons-548360" in 30.357µs
	I0213 21:56:45.718530   16934 start.go:93] Provisioning new machine with config: &{Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 21:56:45.718581   16934 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 21:56:45.720440   16934 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0213 21:56:45.720616   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:56:45.720681   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:56:45.734221   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0213 21:56:45.734646   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:56:45.735159   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:56:45.735184   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:56:45.735492   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:56:45.735643   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:56:45.735769   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:56:45.735889   16934 start.go:159] libmachine.API.Create for "addons-548360" (driver="kvm2")
	I0213 21:56:45.735924   16934 client.go:168] LocalClient.Create starting
	I0213 21:56:45.735962   16934 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem
	I0213 21:56:45.833929   16934 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem
	I0213 21:56:45.975275   16934 main.go:141] libmachine: Running pre-create checks...
	I0213 21:56:45.975297   16934 main.go:141] libmachine: (addons-548360) Calling .PreCreateCheck
	I0213 21:56:45.975781   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:56:45.976194   16934 main.go:141] libmachine: Creating machine...
	I0213 21:56:45.976209   16934 main.go:141] libmachine: (addons-548360) Calling .Create
	I0213 21:56:45.976386   16934 main.go:141] libmachine: (addons-548360) Creating KVM machine...
	I0213 21:56:45.977697   16934 main.go:141] libmachine: (addons-548360) DBG | found existing default KVM network
	I0213 21:56:45.978473   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:45.978320   16956 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f210}
	I0213 21:56:45.984965   16934 main.go:141] libmachine: (addons-548360) DBG | trying to create private KVM network mk-addons-548360 192.168.39.0/24...
	I0213 21:56:46.051525   16934 main.go:141] libmachine: (addons-548360) DBG | private KVM network mk-addons-548360 192.168.39.0/24 created
	I0213 21:56:46.051568   16934 main.go:141] libmachine: (addons-548360) Setting up store path in /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 ...
	I0213 21:56:46.051587   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.051505   16956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:46.051615   16934 main.go:141] libmachine: (addons-548360) Building disk image from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 21:56:46.051635   16934 main.go:141] libmachine: (addons-548360) Downloading /home/jenkins/minikube-integration/18171-8990/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 21:56:46.265585   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.265445   16956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa...
	I0213 21:56:46.408080   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.407963   16956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/addons-548360.rawdisk...
	I0213 21:56:46.408106   16934 main.go:141] libmachine: (addons-548360) DBG | Writing magic tar header
	I0213 21:56:46.408116   16934 main.go:141] libmachine: (addons-548360) DBG | Writing SSH key tar header
	I0213 21:56:46.408127   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.408075   16956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 ...
	I0213 21:56:46.408143   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360
	I0213 21:56:46.408201   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 (perms=drwx------)
	I0213 21:56:46.408229   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines (perms=drwxr-xr-x)
	I0213 21:56:46.408239   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines
	I0213 21:56:46.408257   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:46.408273   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990
	I0213 21:56:46.408290   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 21:56:46.408300   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins
	I0213 21:56:46.408317   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube (perms=drwxr-xr-x)
	I0213 21:56:46.408326   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home
	I0213 21:56:46.408347   16934 main.go:141] libmachine: (addons-548360) DBG | Skipping /home - not owner
	I0213 21:56:46.408366   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990 (perms=drwxrwxr-x)
	I0213 21:56:46.408378   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 21:56:46.408394   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 21:56:46.408408   16934 main.go:141] libmachine: (addons-548360) Creating domain...
	I0213 21:56:46.409852   16934 main.go:141] libmachine: (addons-548360) define libvirt domain using xml: 
	I0213 21:56:46.409895   16934 main.go:141] libmachine: (addons-548360) <domain type='kvm'>
	I0213 21:56:46.409908   16934 main.go:141] libmachine: (addons-548360)   <name>addons-548360</name>
	I0213 21:56:46.409917   16934 main.go:141] libmachine: (addons-548360)   <memory unit='MiB'>4000</memory>
	I0213 21:56:46.409927   16934 main.go:141] libmachine: (addons-548360)   <vcpu>2</vcpu>
	I0213 21:56:46.409936   16934 main.go:141] libmachine: (addons-548360)   <features>
	I0213 21:56:46.409944   16934 main.go:141] libmachine: (addons-548360)     <acpi/>
	I0213 21:56:46.409955   16934 main.go:141] libmachine: (addons-548360)     <apic/>
	I0213 21:56:46.409966   16934 main.go:141] libmachine: (addons-548360)     <pae/>
	I0213 21:56:46.409976   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410009   16934 main.go:141] libmachine: (addons-548360)   </features>
	I0213 21:56:46.410037   16934 main.go:141] libmachine: (addons-548360)   <cpu mode='host-passthrough'>
	I0213 21:56:46.410051   16934 main.go:141] libmachine: (addons-548360)   
	I0213 21:56:46.410063   16934 main.go:141] libmachine: (addons-548360)   </cpu>
	I0213 21:56:46.410077   16934 main.go:141] libmachine: (addons-548360)   <os>
	I0213 21:56:46.410090   16934 main.go:141] libmachine: (addons-548360)     <type>hvm</type>
	I0213 21:56:46.410105   16934 main.go:141] libmachine: (addons-548360)     <boot dev='cdrom'/>
	I0213 21:56:46.410114   16934 main.go:141] libmachine: (addons-548360)     <boot dev='hd'/>
	I0213 21:56:46.410129   16934 main.go:141] libmachine: (addons-548360)     <bootmenu enable='no'/>
	I0213 21:56:46.410158   16934 main.go:141] libmachine: (addons-548360)   </os>
	I0213 21:56:46.410172   16934 main.go:141] libmachine: (addons-548360)   <devices>
	I0213 21:56:46.410190   16934 main.go:141] libmachine: (addons-548360)     <disk type='file' device='cdrom'>
	I0213 21:56:46.410209   16934 main.go:141] libmachine: (addons-548360)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/boot2docker.iso'/>
	I0213 21:56:46.410231   16934 main.go:141] libmachine: (addons-548360)       <target dev='hdc' bus='scsi'/>
	I0213 21:56:46.410244   16934 main.go:141] libmachine: (addons-548360)       <readonly/>
	I0213 21:56:46.410257   16934 main.go:141] libmachine: (addons-548360)     </disk>
	I0213 21:56:46.410271   16934 main.go:141] libmachine: (addons-548360)     <disk type='file' device='disk'>
	I0213 21:56:46.410295   16934 main.go:141] libmachine: (addons-548360)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 21:56:46.410319   16934 main.go:141] libmachine: (addons-548360)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/addons-548360.rawdisk'/>
	I0213 21:56:46.410335   16934 main.go:141] libmachine: (addons-548360)       <target dev='hda' bus='virtio'/>
	I0213 21:56:46.410346   16934 main.go:141] libmachine: (addons-548360)     </disk>
	I0213 21:56:46.410360   16934 main.go:141] libmachine: (addons-548360)     <interface type='network'>
	I0213 21:56:46.410372   16934 main.go:141] libmachine: (addons-548360)       <source network='mk-addons-548360'/>
	I0213 21:56:46.410400   16934 main.go:141] libmachine: (addons-548360)       <model type='virtio'/>
	I0213 21:56:46.410425   16934 main.go:141] libmachine: (addons-548360)     </interface>
	I0213 21:56:46.410438   16934 main.go:141] libmachine: (addons-548360)     <interface type='network'>
	I0213 21:56:46.410451   16934 main.go:141] libmachine: (addons-548360)       <source network='default'/>
	I0213 21:56:46.410465   16934 main.go:141] libmachine: (addons-548360)       <model type='virtio'/>
	I0213 21:56:46.410477   16934 main.go:141] libmachine: (addons-548360)     </interface>
	I0213 21:56:46.410490   16934 main.go:141] libmachine: (addons-548360)     <serial type='pty'>
	I0213 21:56:46.410499   16934 main.go:141] libmachine: (addons-548360)       <target port='0'/>
	I0213 21:56:46.410509   16934 main.go:141] libmachine: (addons-548360)     </serial>
	I0213 21:56:46.410514   16934 main.go:141] libmachine: (addons-548360)     <console type='pty'>
	I0213 21:56:46.410522   16934 main.go:141] libmachine: (addons-548360)       <target type='serial' port='0'/>
	I0213 21:56:46.410532   16934 main.go:141] libmachine: (addons-548360)     </console>
	I0213 21:56:46.410558   16934 main.go:141] libmachine: (addons-548360)     <rng model='virtio'>
	I0213 21:56:46.410581   16934 main.go:141] libmachine: (addons-548360)       <backend model='random'>/dev/random</backend>
	I0213 21:56:46.410597   16934 main.go:141] libmachine: (addons-548360)     </rng>
	I0213 21:56:46.410609   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410623   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410636   16934 main.go:141] libmachine: (addons-548360)   </devices>
	I0213 21:56:46.410650   16934 main.go:141] libmachine: (addons-548360) </domain>
	I0213 21:56:46.410665   16934 main.go:141] libmachine: (addons-548360) 
	I0213 21:56:46.415975   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:79:b1:f8 in network default
	I0213 21:56:46.416496   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:46.416516   16934 main.go:141] libmachine: (addons-548360) Ensuring networks are active...
	I0213 21:56:46.417247   16934 main.go:141] libmachine: (addons-548360) Ensuring network default is active
	I0213 21:56:46.417684   16934 main.go:141] libmachine: (addons-548360) Ensuring network mk-addons-548360 is active
	I0213 21:56:46.418274   16934 main.go:141] libmachine: (addons-548360) Getting domain xml...
	I0213 21:56:46.419012   16934 main.go:141] libmachine: (addons-548360) Creating domain...
	I0213 21:56:47.809666   16934 main.go:141] libmachine: (addons-548360) Waiting to get IP...
	I0213 21:56:47.810411   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:47.810857   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:47.810886   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:47.810827   16956 retry.go:31] will retry after 205.552225ms: waiting for machine to come up
	I0213 21:56:48.018429   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.018810   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.018834   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.018786   16956 retry.go:31] will retry after 353.436999ms: waiting for machine to come up
	I0213 21:56:48.373397   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.373891   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.373916   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.373840   16956 retry.go:31] will retry after 442.017345ms: waiting for machine to come up
	I0213 21:56:48.817683   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.818120   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.818158   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.818082   16956 retry.go:31] will retry after 401.54804ms: waiting for machine to come up
	I0213 21:56:49.221909   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:49.222386   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:49.222419   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:49.222321   16956 retry.go:31] will retry after 599.416194ms: waiting for machine to come up
	I0213 21:56:49.823133   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:49.823555   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:49.823592   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:49.823490   16956 retry.go:31] will retry after 792.814217ms: waiting for machine to come up
	I0213 21:56:50.617375   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:50.617929   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:50.617959   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:50.617852   16956 retry.go:31] will retry after 878.606074ms: waiting for machine to come up
	I0213 21:56:51.498453   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:51.498829   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:51.498856   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:51.498787   16956 retry.go:31] will retry after 1.376121244s: waiting for machine to come up
	I0213 21:56:52.876139   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:52.876641   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:52.876669   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:52.876587   16956 retry.go:31] will retry after 1.235409518s: waiting for machine to come up
	I0213 21:56:54.113466   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:54.113920   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:54.113947   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:54.113849   16956 retry.go:31] will retry after 1.675686898s: waiting for machine to come up
	I0213 21:56:55.791122   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:55.791540   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:55.791579   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:55.791458   16956 retry.go:31] will retry after 2.662216547s: waiting for machine to come up
	I0213 21:56:58.457312   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:58.457693   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:58.457723   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:58.457640   16956 retry.go:31] will retry after 2.61351666s: waiting for machine to come up
	I0213 21:57:01.072944   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:01.073387   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:57:01.073415   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:57:01.073325   16956 retry.go:31] will retry after 2.98804372s: waiting for machine to come up
	I0213 21:57:04.065418   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:04.065899   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:57:04.065930   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:57:04.065827   16956 retry.go:31] will retry after 4.324379457s: waiting for machine to come up
	I0213 21:57:08.393248   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:08.393624   16934 main.go:141] libmachine: (addons-548360) Found IP for machine: 192.168.39.217
	I0213 21:57:08.393649   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has current primary IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:08.393659   16934 main.go:141] libmachine: (addons-548360) Reserving static IP address...
	I0213 21:57:08.394090   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find host DHCP lease matching {name: "addons-548360", mac: "52:54:00:25:20:5b", ip: "192.168.39.217"} in network mk-addons-548360
	I0213 21:57:08.475351   16934 main.go:141] libmachine: (addons-548360) Reserved static IP address: 192.168.39.217
	I0213 21:57:08.475375   16934 main.go:141] libmachine: (addons-548360) Waiting for SSH to be available...
	I0213 21:57:08.475420   16934 main.go:141] libmachine: (addons-548360) DBG | Getting to WaitForSSH function...
	I0213 21:57:08.478138   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:08.478490   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360
	I0213 21:57:08.478511   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find defined IP address of network mk-addons-548360 interface with MAC address 52:54:00:25:20:5b
	I0213 21:57:08.478790   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH client type: external
	I0213 21:57:08.478818   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa (-rw-------)
	I0213 21:57:08.478858   16934 main.go:141] libmachine: (addons-548360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 21:57:08.478875   16934 main.go:141] libmachine: (addons-548360) DBG | About to run SSH command:
	I0213 21:57:08.478896   16934 main.go:141] libmachine: (addons-548360) DBG | exit 0
	I0213 21:57:08.489393   16934 main.go:141] libmachine: (addons-548360) DBG | SSH cmd err, output: exit status 255: 
	I0213 21:57:08.489421   16934 main.go:141] libmachine: (addons-548360) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0213 21:57:08.489429   16934 main.go:141] libmachine: (addons-548360) DBG | command : exit 0
	I0213 21:57:08.489435   16934 main.go:141] libmachine: (addons-548360) DBG | err     : exit status 255
	I0213 21:57:08.489447   16934 main.go:141] libmachine: (addons-548360) DBG | output  : 
	I0213 21:57:11.490210   16934 main.go:141] libmachine: (addons-548360) DBG | Getting to WaitForSSH function...
	I0213 21:57:11.492757   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.493122   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.493166   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.493222   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH client type: external
	I0213 21:57:11.493249   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa (-rw-------)
	I0213 21:57:11.493286   16934 main.go:141] libmachine: (addons-548360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 21:57:11.493325   16934 main.go:141] libmachine: (addons-548360) DBG | About to run SSH command:
	I0213 21:57:11.493356   16934 main.go:141] libmachine: (addons-548360) DBG | exit 0
	I0213 21:57:11.589967   16934 main.go:141] libmachine: (addons-548360) DBG | SSH cmd err, output: <nil>: 
	I0213 21:57:11.590212   16934 main.go:141] libmachine: (addons-548360) KVM machine creation complete!
	I0213 21:57:11.590560   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:57:11.591153   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:11.591349   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:11.591493   16934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 21:57:11.591509   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:11.592827   16934 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 21:57:11.592846   16934 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 21:57:11.592856   16934 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 21:57:11.592866   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.595301   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.595698   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.595723   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.595896   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.596216   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.596377   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.596514   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.596717   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.597207   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.597227   16934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 21:57:11.729363   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:11.729439   16934 main.go:141] libmachine: Detecting the provisioner...
	I0213 21:57:11.729454   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.732148   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.732530   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.732561   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.732743   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.732938   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.733093   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.733235   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.733415   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.733712   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.733723   16934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 21:57:11.866937   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 21:57:11.867092   16934 main.go:141] libmachine: found compatible host: buildroot
	I0213 21:57:11.867119   16934 main.go:141] libmachine: Provisioning with buildroot...
	I0213 21:57:11.867131   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:11.867417   16934 buildroot.go:166] provisioning hostname "addons-548360"
	I0213 21:57:11.867440   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:11.867665   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.870535   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.870962   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.871000   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.871147   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.871392   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.871602   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.871715   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.871899   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.872203   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.872220   16934 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-548360 && echo "addons-548360" | sudo tee /etc/hostname
	I0213 21:57:12.019622   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-548360
	
	I0213 21:57:12.019655   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.022459   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.022777   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.022814   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.022963   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.023178   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.023346   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.023487   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.023655   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:12.023969   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:12.023987   16934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-548360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-548360/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-548360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 21:57:12.163895   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:12.163923   16934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 21:57:12.163956   16934 buildroot.go:174] setting up certificates
	I0213 21:57:12.163969   16934 provision.go:83] configureAuth start
	I0213 21:57:12.163982   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:12.164252   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:12.166791   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.167133   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.167168   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.167345   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.169702   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.170072   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.170104   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.170211   16934 provision.go:138] copyHostCerts
	I0213 21:57:12.170298   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 21:57:12.170446   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 21:57:12.170513   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 21:57:12.170564   16934 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.addons-548360 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube addons-548360]
	I0213 21:57:12.411394   16934 provision.go:172] copyRemoteCerts
	I0213 21:57:12.411461   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 21:57:12.411482   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.414122   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.414437   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.414461   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.414651   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.414845   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.414979   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.415116   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:12.510978   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 21:57:12.535503   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 21:57:12.561952   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 21:57:12.586230   16934 provision.go:86] duration metric: configureAuth took 422.246144ms
	I0213 21:57:12.586258   16934 buildroot.go:189] setting minikube options for container-runtime
	I0213 21:57:12.586451   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:12.586520   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.589319   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.589706   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.589738   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.589978   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.590184   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.590358   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.590477   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.590642   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:12.590943   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:12.590958   16934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 21:57:12.914808   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 21:57:12.914834   16934 main.go:141] libmachine: Checking connection to Docker...
	I0213 21:57:12.914848   16934 main.go:141] libmachine: (addons-548360) Calling .GetURL
	I0213 21:57:12.916240   16934 main.go:141] libmachine: (addons-548360) DBG | Using libvirt version 6000000
	I0213 21:57:12.918551   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.918976   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.919004   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.919177   16934 main.go:141] libmachine: Docker is up and running!
	I0213 21:57:12.919195   16934 main.go:141] libmachine: Reticulating splines...
	I0213 21:57:12.919212   16934 client.go:171] LocalClient.Create took 27.183268828s
	I0213 21:57:12.919238   16934 start.go:167] duration metric: libmachine.API.Create for "addons-548360" took 27.18335019s
	I0213 21:57:12.919251   16934 start.go:300] post-start starting for "addons-548360" (driver="kvm2")
	I0213 21:57:12.919267   16934 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 21:57:12.919289   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:12.919521   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 21:57:12.919547   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.921705   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.922023   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.922065   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.922201   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.922491   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.922696   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.922843   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.019668   16934 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 21:57:13.023917   16934 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 21:57:13.023951   16934 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 21:57:13.024031   16934 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 21:57:13.024055   16934 start.go:303] post-start completed in 104.795067ms
	I0213 21:57:13.024089   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:57:13.024654   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:13.027127   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.027400   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.027422   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.027659   16934 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json ...
	I0213 21:57:13.027869   16934 start.go:128] duration metric: createHost completed in 27.309278567s
	I0213 21:57:13.027893   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.030177   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.030497   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.030525   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.030682   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.030884   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.031012   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.031129   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.031249   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:13.031539   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:13.031551   16934 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 21:57:13.162435   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707861433.146678870
	
	I0213 21:57:13.162455   16934 fix.go:206] guest clock: 1707861433.146678870
	I0213 21:57:13.162464   16934 fix.go:219] Guest: 2024-02-13 21:57:13.14667887 +0000 UTC Remote: 2024-02-13 21:57:13.027880377 +0000 UTC m=+27.428197058 (delta=118.798493ms)
	I0213 21:57:13.162524   16934 fix.go:190] guest clock delta is within tolerance: 118.798493ms
	I0213 21:57:13.162531   16934 start.go:83] releasing machines lock for "addons-548360", held for 27.444007014s
	I0213 21:57:13.162562   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.162844   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:13.165380   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.165773   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.165803   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.166010   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166524   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166699   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166816   16934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 21:57:13.166859   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.166986   16934 ssh_runner.go:195] Run: cat /version.json
	I0213 21:57:13.167012   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.169714   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.169881   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170122   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.170181   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170256   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.170290   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.170298   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170483   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.170485   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.170680   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.170691   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.170825   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.170845   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.170925   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.263317   16934 ssh_runner.go:195] Run: systemctl --version
	I0213 21:57:13.285698   16934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 21:57:13.450747   16934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 21:57:13.457033   16934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 21:57:13.457107   16934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 21:57:13.474063   16934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 21:57:13.474085   16934 start.go:475] detecting cgroup driver to use...
	I0213 21:57:13.474212   16934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 21:57:13.492777   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 21:57:13.506672   16934 docker.go:217] disabling cri-docker service (if available) ...
	I0213 21:57:13.506750   16934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 21:57:13.520577   16934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 21:57:13.534578   16934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 21:57:13.647415   16934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 21:57:13.770687   16934 docker.go:233] disabling docker service ...
	I0213 21:57:13.770763   16934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 21:57:13.784855   16934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 21:57:13.797613   16934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 21:57:13.910187   16934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 21:57:14.020787   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 21:57:14.033523   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 21:57:14.050901   16934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 21:57:14.050974   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.061538   16934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 21:57:14.061603   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.072429   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.083849   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.095122   16934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 21:57:14.106290   16934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 21:57:14.116196   16934 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 21:57:14.116273   16934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 21:57:14.129705   16934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 21:57:14.139786   16934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 21:57:14.238953   16934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 21:57:14.405093   16934 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 21:57:14.405167   16934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 21:57:14.409981   16934 start.go:543] Will wait 60s for crictl version
	I0213 21:57:14.410037   16934 ssh_runner.go:195] Run: which crictl
	I0213 21:57:14.413605   16934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 21:57:14.448266   16934 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 21:57:14.448402   16934 ssh_runner.go:195] Run: crio --version
	I0213 21:57:14.499943   16934 ssh_runner.go:195] Run: crio --version
	I0213 21:57:14.552154   16934 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 21:57:14.553432   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:14.555983   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:14.556330   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:14.556347   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:14.556559   16934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 21:57:14.560599   16934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 21:57:14.573837   16934 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 21:57:14.573915   16934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:14.608580   16934 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 21:57:14.608666   16934 ssh_runner.go:195] Run: which lz4
	I0213 21:57:14.612512   16934 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 21:57:14.616610   16934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 21:57:14.616647   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 21:57:16.408492   16934 crio.go:444] Took 1.796006 seconds to copy over tarball
	I0213 21:57:16.408566   16934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 21:57:19.845841   16934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.437246871s)
	I0213 21:57:19.845892   16934 crio.go:451] Took 3.437365 seconds to extract the tarball
	I0213 21:57:19.845907   16934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 21:57:19.886917   16934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:19.959974   16934 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 21:57:19.960000   16934 cache_images.go:84] Images are preloaded, skipping loading
	I0213 21:57:19.960092   16934 ssh_runner.go:195] Run: crio config
	I0213 21:57:20.029119   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:57:20.029138   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:57:20.029157   16934 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 21:57:20.029174   16934 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-548360 NodeName:addons-548360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 21:57:20.029329   16934 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-548360"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 21:57:20.029417   16934 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-548360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 21:57:20.029481   16934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 21:57:20.038121   16934 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 21:57:20.038205   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 21:57:20.046271   16934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0213 21:57:20.063065   16934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 21:57:20.079248   16934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0213 21:57:20.094311   16934 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0213 21:57:20.098033   16934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 21:57:20.110872   16934 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360 for IP: 192.168.39.217
	I0213 21:57:20.110905   16934 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.111035   16934 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 21:57:20.277450   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt ...
	I0213 21:57:20.277477   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt: {Name:mk31e81c6fcf369272e568a89360f64eaee632c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.277635   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key ...
	I0213 21:57:20.277647   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key: {Name:mk5a13bfb25b8f575804165b4b8a96685b384af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.277713   16934 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 21:57:20.445135   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt ...
	I0213 21:57:20.445165   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt: {Name:mka72b4c29ed9f2eedab8eb8d31a798dd480cbc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.445319   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key ...
	I0213 21:57:20.445330   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key: {Name:mkcf59b560f8ce9f58eb3ce5a7742414c4473ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.445431   16934 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key
	I0213 21:57:20.445445   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt with IP's: []
	I0213 21:57:20.763645   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt ...
	I0213 21:57:20.763681   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: {Name:mk2c996c13a9e43ea51358519a302c77d5aaecdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.763905   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key ...
	I0213 21:57:20.763921   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key: {Name:mkc0a31db82e609a57c11d8ec4cf3f8e14dda8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.764020   16934 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f
	I0213 21:57:20.764039   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 21:57:20.920272   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f ...
	I0213 21:57:20.920304   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f: {Name:mk302d4b693f6d2f2213e0fbf36bf07e73d6785e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.920477   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f ...
	I0213 21:57:20.920494   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f: {Name:mk13522a64bd02534e8ec080df3d0b52a53cf69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.920590   16934 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt
	I0213 21:57:20.920660   16934 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key
	I0213 21:57:20.920708   16934 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key
	I0213 21:57:20.920724   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt with IP's: []
	I0213 21:57:20.983569   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt ...
	I0213 21:57:20.983600   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt: {Name:mk2e43f2d8ba0f16d8d65857771ea6ff735ff239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.983766   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key ...
	I0213 21:57:20.983781   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key: {Name:mk1aca1e5999dbaad0b06a5aa832f0f6fd0a622a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.983987   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 21:57:20.984022   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 21:57:20.984047   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 21:57:20.984070   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 21:57:20.984625   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 21:57:21.009637   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 21:57:21.033701   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 21:57:21.058037   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 21:57:21.082777   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 21:57:21.109205   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 21:57:21.134205   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 21:57:21.158492   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 21:57:21.181609   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 21:57:21.203737   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 21:57:21.219946   16934 ssh_runner.go:195] Run: openssl version
	I0213 21:57:21.225702   16934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 21:57:21.235845   16934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.240484   16934 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.240542   16934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.246279   16934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 21:57:21.256238   16934 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 21:57:21.260501   16934 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 21:57:21.260560   16934 kubeadm.go:404] StartCluster: {Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:57:21.260626   16934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 21:57:21.260672   16934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 21:57:21.301542   16934 cri.go:89] found id: ""
	I0213 21:57:21.301616   16934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 21:57:21.310558   16934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 21:57:21.320254   16934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 21:57:21.330149   16934 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 21:57:21.330212   16934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 21:57:21.383177   16934 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 21:57:21.383473   16934 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 21:57:21.522929   16934 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 21:57:21.523044   16934 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 21:57:21.523170   16934 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 21:57:21.760803   16934 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 21:57:22.005058   16934 out.go:204]   - Generating certificates and keys ...
	I0213 21:57:22.005162   16934 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 21:57:22.005236   16934 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 21:57:22.055445   16934 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 21:57:22.227856   16934 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 21:57:22.284356   16934 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 21:57:22.659705   16934 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 21:57:22.790844   16934 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 21:57:22.790995   16934 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-548360 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0213 21:57:22.942718   16934 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 21:57:22.942902   16934 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-548360 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0213 21:57:23.060728   16934 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 21:57:23.164026   16934 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 21:57:23.223136   16934 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 21:57:23.223218   16934 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 21:57:23.593052   16934 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 21:57:23.704168   16934 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 21:57:23.849238   16934 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 21:57:23.925681   16934 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 21:57:23.926406   16934 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 21:57:23.928771   16934 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 21:57:23.930838   16934 out.go:204]   - Booting up control plane ...
	I0213 21:57:23.930956   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 21:57:23.931047   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 21:57:23.931164   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 21:57:23.947096   16934 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 21:57:23.947621   16934 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 21:57:23.947669   16934 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 21:57:24.084614   16934 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 21:57:33.085602   16934 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002336 seconds
	I0213 21:57:33.085834   16934 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 21:57:33.101909   16934 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 21:57:33.634165   16934 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 21:57:33.634401   16934 kubeadm.go:322] [mark-control-plane] Marking the node addons-548360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 21:57:34.149693   16934 kubeadm.go:322] [bootstrap-token] Using token: cbmtcn.y9dyg9a87331xks9
	I0213 21:57:34.151220   16934 out.go:204]   - Configuring RBAC rules ...
	I0213 21:57:34.151341   16934 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 21:57:34.159192   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 21:57:34.167576   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 21:57:34.171544   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 21:57:34.177646   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 21:57:34.182556   16934 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 21:57:34.199342   16934 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 21:57:34.423464   16934 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 21:57:34.588774   16934 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 21:57:34.589896   16934 kubeadm.go:322] 
	I0213 21:57:34.589952   16934 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 21:57:34.589964   16934 kubeadm.go:322] 
	I0213 21:57:34.590032   16934 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 21:57:34.590059   16934 kubeadm.go:322] 
	I0213 21:57:34.590109   16934 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 21:57:34.590194   16934 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 21:57:34.590278   16934 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 21:57:34.590297   16934 kubeadm.go:322] 
	I0213 21:57:34.590377   16934 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 21:57:34.590394   16934 kubeadm.go:322] 
	I0213 21:57:34.590476   16934 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 21:57:34.590485   16934 kubeadm.go:322] 
	I0213 21:57:34.590575   16934 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 21:57:34.590688   16934 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 21:57:34.590781   16934 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 21:57:34.590792   16934 kubeadm.go:322] 
	I0213 21:57:34.590902   16934 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 21:57:34.591010   16934 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 21:57:34.591034   16934 kubeadm.go:322] 
	I0213 21:57:34.591141   16934 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cbmtcn.y9dyg9a87331xks9 \
	I0213 21:57:34.591269   16934 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 21:57:34.591301   16934 kubeadm.go:322] 	--control-plane 
	I0213 21:57:34.591311   16934 kubeadm.go:322] 
	I0213 21:57:34.591408   16934 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 21:57:34.591419   16934 kubeadm.go:322] 
	I0213 21:57:34.591510   16934 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cbmtcn.y9dyg9a87331xks9 \
	I0213 21:57:34.591643   16934 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 21:57:34.591968   16934 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 21:57:34.591994   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:57:34.592010   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:57:34.593754   16934 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 21:57:34.595036   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 21:57:34.646395   16934 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 21:57:34.707445   16934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 21:57:34.707538   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:34.707541   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=addons-548360 minikube.k8s.io/updated_at=2024_02_13T21_57_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:34.924172   16934 ops.go:34] apiserver oom_adj: -16
	I0213 21:57:34.924293   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:35.424455   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:35.924977   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.424360   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.924887   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.424355   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.925341   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.425207   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.924889   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.424552   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.924705   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.425099   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.924859   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.424397   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.925361   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.424971   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.924424   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.425109   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.924938   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.425167   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.924635   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.424385   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.924499   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.424527   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.924901   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:47.029676   16934 kubeadm.go:1088] duration metric: took 12.322204741s to wait for elevateKubeSystemPrivileges.
	I0213 21:57:47.029706   16934 kubeadm.go:406] StartCluster complete in 25.769151528s
	I0213 21:57:47.029723   16934 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:47.029855   16934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:57:47.030230   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:47.030425   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 21:57:47.030482   16934 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0213 21:57:47.030590   16934 addons.go:69] Setting ingress-dns=true in profile "addons-548360"
	I0213 21:57:47.030604   16934 addons.go:69] Setting yakd=true in profile "addons-548360"
	I0213 21:57:47.030617   16934 addons.go:234] Setting addon ingress-dns=true in "addons-548360"
	I0213 21:57:47.030629   16934 addons.go:234] Setting addon yakd=true in "addons-548360"
	I0213 21:57:47.030640   16934 addons.go:69] Setting default-storageclass=true in profile "addons-548360"
	I0213 21:57:47.030659   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:47.030666   16934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-548360"
	I0213 21:57:47.030669   16934 addons.go:69] Setting metrics-server=true in profile "addons-548360"
	I0213 21:57:47.030675   16934 addons.go:69] Setting gcp-auth=true in profile "addons-548360"
	I0213 21:57:47.030679   16934 addons.go:69] Setting volumesnapshots=true in profile "addons-548360"
	I0213 21:57:47.030682   16934 addons.go:234] Setting addon metrics-server=true in "addons-548360"
	I0213 21:57:47.030691   16934 addons.go:234] Setting addon volumesnapshots=true in "addons-548360"
	I0213 21:57:47.030660   16934 addons.go:69] Setting inspektor-gadget=true in profile "addons-548360"
	I0213 21:57:47.030697   16934 addons.go:69] Setting helm-tiller=true in profile "addons-548360"
	I0213 21:57:47.030686   16934 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-548360"
	I0213 21:57:47.030691   16934 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-548360"
	I0213 21:57:47.030711   16934 addons.go:69] Setting ingress=true in profile "addons-548360"
	I0213 21:57:47.030716   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030717   16934 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-548360"
	I0213 21:57:47.030723   16934 addons.go:234] Setting addon ingress=true in "addons-548360"
	I0213 21:57:47.030724   16934 addons.go:69] Setting cloud-spanner=true in profile "addons-548360"
	I0213 21:57:47.030726   16934 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-548360"
	I0213 21:57:47.030735   16934 addons.go:234] Setting addon cloud-spanner=true in "addons-548360"
	I0213 21:57:47.030755   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030756   16934 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-548360"
	I0213 21:57:47.030770   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030787   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030636   16934 addons.go:69] Setting registry=true in profile "addons-548360"
	I0213 21:57:47.030828   16934 addons.go:234] Setting addon registry=true in "addons-548360"
	I0213 21:57:47.030859   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030670   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030670   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031143   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031166   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031184   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030719   16934 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-548360"
	I0213 21:57:47.031241   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031260   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031271   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030706   16934 addons.go:234] Setting addon inspektor-gadget=true in "addons-548360"
	I0213 21:57:47.030722   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031311   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031350   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030717   16934 addons.go:69] Setting storage-provisioner=true in profile "addons-548360"
	I0213 21:57:47.031394   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031398   16934 addons.go:234] Setting addon storage-provisioner=true in "addons-548360"
	I0213 21:57:47.031225   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030707   16934 addons.go:234] Setting addon helm-tiller=true in "addons-548360"
	I0213 21:57:47.031229   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031425   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.030693   16934 mustload.go:65] Loading cluster: addons-548360
	I0213 21:57:47.031226   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031477   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031505   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031307   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031523   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031564   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031590   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031275   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031633   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031638   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031663   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031712   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031731   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031905   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031943   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031971   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.032006   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.032214   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.045854   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 21:57:47.046776   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.047275   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.047296   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.047646   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.048199   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.048220   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.048237   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0213 21:57:47.048629   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.049042   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.049057   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.049361   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.049545   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.049797   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0213 21:57:47.050432   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.050471   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.050671   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:47.051029   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.051063   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.051502   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.052303   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.052322   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.053241   16934 addons.go:234] Setting addon default-storageclass=true in "addons-548360"
	I0213 21:57:47.053269   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.053590   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.053614   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.057781   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.058554   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.058583   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.088525   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0213 21:57:47.088764   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0213 21:57:47.089162   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.089262   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.089843   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.089863   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.090244   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.090260   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.090545   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.090586   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.091098   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.091150   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.091785   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.091820   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.094306   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0213 21:57:47.094592   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0213 21:57:47.094770   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.095403   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.095545   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.095564   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.095818   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0213 21:57:47.095979   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.096263   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.096280   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.096612   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.096673   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.096714   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.097121   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.097155   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.097949   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0213 21:57:47.098521   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.098618   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.099050   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.099068   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.099455   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.099591   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.099605   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.100036   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.100067   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.100355   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.100532   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.103621   16934 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-548360"
	I0213 21:57:47.103665   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.104077   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.104109   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.108071   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0213 21:57:47.108543   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.109096   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.109114   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.109477   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.110007   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.110041   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.110245   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0213 21:57:47.115905   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0213 21:57:47.116559   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.117173   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.117205   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.117529   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.118089   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.118129   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.118340   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I0213 21:57:47.119276   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0213 21:57:47.120443   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.120769   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0213 21:57:47.120981   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.121056   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.121082   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.121510   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.121857   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.122084   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.122104   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.122186   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.122478   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.122522   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.122557   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.122693   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.122713   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.122960   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.123701   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.123714   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.124518   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.124757   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.126924   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0213 21:57:47.125558   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.125677   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.130033   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0213 21:57:47.131428   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0213 21:57:47.130566   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0213 21:57:47.130597   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0213 21:57:47.130850   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.132834   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0213 21:57:47.135131   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0213 21:57:47.134182   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.134213   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0213 21:57:47.134245   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.135771   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.135842   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.136020   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0213 21:57:47.138009   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0213 21:57:47.139062   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0213 21:57:47.137089   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.137122   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0213 21:57:47.137240   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.137255   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0213 21:57:47.137277   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.137451   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.137624   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0213 21:57:47.137759   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.140059   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.141030   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:47.142661   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0213 21:57:47.140676   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.141116   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.141209   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.141953   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.142025   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.142251   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.142283   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.142502   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.142621   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0213 21:57:47.145036   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:47.145112   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.146277   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144171   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144215   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144254   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0213 21:57:47.146422   16934 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:47.146436   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0213 21:57:47.144342   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.146454   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.146456   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144545   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.144809   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.145478   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.143884   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0213 21:57:47.146526   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0213 21:57:47.146538   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.146538   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.146493   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.147257   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147267   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0213 21:57:47.147320   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147361   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147395   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.147440   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.147477   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147961   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.147985   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148374   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.148710   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.148756   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148834   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.148853   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148935   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.149380   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.149396   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.149531   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.149552   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.149797   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.149953   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.150015   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150218   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150447   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.150678   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150824   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0213 21:57:47.151578   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.152662   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.152679   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.152743   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.152798   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.152839   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.153878   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.155566   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0213 21:57:47.156731   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0213 21:57:47.156750   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0213 21:57:47.155651   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.156775   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.158032   16934 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0213 21:57:47.154726   16934 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:47.154892   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.155063   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.155189   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.154367   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.155793   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.156977   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.159265   16934 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0213 21:57:47.159323   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0213 21:57:47.159344   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159395   16934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 21:57:47.161217   16934 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:47.161232   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 21:57:47.161247   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159480   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 21:57:47.161311   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159510   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161358   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.159536   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161380   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160074   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.160097   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.162747   16934 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0213 21:57:47.162788   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0213 21:57:47.160303   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160875   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.164227   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161610   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.162518   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.164269   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160177   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.163235   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.164293   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.164110   16934 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:47.164349   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0213 21:57:47.164369   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.164327   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.164529   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.164512   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.164573   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.164613   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.164728   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.165001   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.166018   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.166027   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.166058   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.167409   16934 out.go:177]   - Using image docker.io/registry:2.8.3
	I0213 21:57:47.166108   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.166149   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.166358   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.166405   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.166797   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.166804   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.166969   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.167327   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0213 21:57:47.170521   16934 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0213 21:57:47.168953   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.168978   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.169017   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.169511   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.169529   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.170035   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.170086   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.170109   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.170814   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.171989   16934 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0213 21:57:47.172006   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.172012   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0213 21:57:47.172029   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.172031   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.172820   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.172869   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.172908   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.172921   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.172938   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.173000   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0213 21:57:47.173007   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.173046   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.173254   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.173319   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.173674   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.173698   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.174191   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.174224   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.174407   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.174685   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.174702   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.175052   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.175204   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.175864   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.177585   16934 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0213 21:57:47.178764   16934 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:47.178781   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0213 21:57:47.178796   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.176982   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.177406   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.178883   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.178910   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.178121   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.180268   16934 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0213 21:57:47.179123   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.181393   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0213 21:57:47.181407   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0213 21:57:47.181425   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.181600   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.181729   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.183263   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.183859   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.183885   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.184050   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.184236   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.184374   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.184490   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.185009   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.185438   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.185458   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.185620   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.185734   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.185834   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.185929   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.192254   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0213 21:57:47.192362   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0213 21:57:47.192732   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.192810   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.193237   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.193266   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.193698   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.193704   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.193721   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.194057   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.194098   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.194140   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0213 21:57:47.194474   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.194662   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.194955   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.194982   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.195380   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.195554   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.196027   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.197947   16934 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0213 21:57:47.196695   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.197085   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.199344   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0213 21:57:47.199359   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0213 21:57:47.199376   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.197762   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0213 21:57:47.200692   16934 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0213 21:57:47.199793   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.201932   16934 out.go:177]   - Using image docker.io/busybox:stable
	I0213 21:57:47.203167   16934 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0213 21:57:47.201948   16934 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:47.202386   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.202561   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.203029   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.204430   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0213 21:57:47.204454   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.204463   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.204511   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.204537   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.204584   16934 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:47.204593   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0213 21:57:47.204606   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.205067   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.205117   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.205305   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.205478   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.205539   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.207410   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.207766   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.207801   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.207905   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.207977   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.208012   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.208058   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.209527   16934 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0213 21:57:47.208422   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.208453   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.208526   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.210797   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 21:57:47.210824   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 21:57:47.210838   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.210862   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.210896   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.211378   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.211410   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.211612   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.213531   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.213812   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.213831   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.213968   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.214110   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.214237   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.214341   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.341672   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 21:57:47.447686   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:47.452946   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0213 21:57:47.452965   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0213 21:57:47.466985   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:47.468370   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:47.502966   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:47.538541   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0213 21:57:47.538572   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0213 21:57:47.586308   16934 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0213 21:57:47.586331   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0213 21:57:47.594846   16934 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0213 21:57:47.594869   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0213 21:57:47.604393   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:47.623156   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:47.655990   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:47.670221   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0213 21:57:47.670248   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0213 21:57:47.671130   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 21:57:47.671147   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0213 21:57:47.679982   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0213 21:57:47.680011   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0213 21:57:47.690361   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0213 21:57:47.690381   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0213 21:57:47.721338   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0213 21:57:47.721368   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0213 21:57:47.744448   16934 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:47.744467   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0213 21:57:47.799636   16934 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0213 21:57:47.799658   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0213 21:57:47.822602   16934 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-548360" context rescaled to 1 replicas
	I0213 21:57:47.822648   16934 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 21:57:47.824421   16934 out.go:177] * Verifying Kubernetes components...
	I0213 21:57:47.825670   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:57:47.885945   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:47.885970   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0213 21:57:47.920330   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0213 21:57:47.920360   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0213 21:57:48.022962   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0213 21:57:48.022993   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0213 21:57:48.024577   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 21:57:48.024597   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 21:57:48.044501   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0213 21:57:48.044532   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0213 21:57:48.045118   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:48.051693   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:48.095850   16934 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0213 21:57:48.095883   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0213 21:57:48.107249   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0213 21:57:48.107277   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0213 21:57:48.161641   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0213 21:57:48.161663   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0213 21:57:48.199292   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:48.199318   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 21:57:48.212361   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0213 21:57:48.212382   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0213 21:57:48.219986   16934 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:48.220006   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0213 21:57:48.260428   16934 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0213 21:57:48.260449   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0213 21:57:48.296036   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:48.296064   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0213 21:57:48.305511   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0213 21:57:48.305545   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0213 21:57:48.321860   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:48.341904   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:48.357546   16934 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0213 21:57:48.357569   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0213 21:57:48.373747   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0213 21:57:48.373767   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0213 21:57:48.402607   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:48.453552   16934 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0213 21:57:48.453580   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0213 21:57:48.465745   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0213 21:57:48.465764   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0213 21:57:48.565468   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0213 21:57:48.565489   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0213 21:57:48.567452   16934 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:48.567471   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0213 21:57:48.632273   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0213 21:57:48.632299   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0213 21:57:48.644098   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:48.705183   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:48.705206   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0213 21:57:48.777708   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:51.631112   16934 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.289381169s)
	I0213 21:57:51.631157   16934 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 21:57:53.203298   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.755572894s)
	I0213 21:57:53.203353   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:53.203367   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:53.203671   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:53.203692   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:53.203703   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:53.203715   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:53.203950   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:53.203952   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:53.203964   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:54.160937   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0213 21:57:54.160973   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:54.164341   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.164789   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:54.164821   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.165081   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:54.165261   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:54.165411   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:54.165574   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:54.350007   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0213 21:57:54.384394   16934 addons.go:234] Setting addon gcp-auth=true in "addons-548360"
	I0213 21:57:54.384452   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:54.384854   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:54.384905   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:54.413216   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0213 21:57:54.413683   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:54.414184   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:54.414209   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:54.414520   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:54.415049   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:54.415089   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:54.430396   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0213 21:57:54.430880   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:54.431367   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:54.431389   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:54.431719   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:54.431923   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:54.433611   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:54.433855   16934 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0213 21:57:54.433892   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:54.437038   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.437508   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:54.437540   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.437740   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:54.437963   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:54.438158   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:54.438369   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:55.188662   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.721646525s)
	I0213 21:57:55.188709   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:55.188721   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:55.189113   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:55.189124   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:55.189131   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:55.189151   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:55.189161   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:55.189376   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:55.189389   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:55.189412   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.183469   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.680449659s)
	I0213 21:57:57.183545   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.560360593s)
	I0213 21:57:57.183575   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183578   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183591   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183594   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.715195126s)
	I0213 21:57:57.183620   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183503   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.579071088s)
	I0213 21:57:57.183637   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183649   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183651   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.527634671s)
	I0213 21:57:57.183659   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183667   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183592   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183681   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183671   16934 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (9.357981717s)
	I0213 21:57:57.183704   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.138561285s)
	I0213 21:57:57.183723   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183732   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183735   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.132013303s)
	I0213 21:57:57.183753   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183763   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184085   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184094   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184100   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184114   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184122   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184123   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184187   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184188   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184214   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184220   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184224   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184229   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184235   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184239   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184244   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184239   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184264   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184287   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184303   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184336   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184360   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184378   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184394   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184248   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.185166   16934 node_ready.go:35] waiting up to 6m0s for node "addons-548360" to be "Ready" ...
	I0213 21:57:57.185361   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185387   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185394   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.185402   16934 addons.go:470] Verifying addon registry=true in "addons-548360"
	I0213 21:57:57.188119   16934 out.go:177] * Verifying registry addon...
	I0213 21:57:57.185818   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185838   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185858   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185891   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185910   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185925   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185955   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185971   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185988   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.186008   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.187435   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.865522581s)
	I0213 21:57:57.187502   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.845558901s)
	I0213 21:57:57.187545   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.784913671s)
	I0213 21:57:57.187611   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.543481909s)
	I0213 21:57:57.187670   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.187713   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.189467   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189481   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189492   16934 addons.go:470] Verifying addon ingress=true in "addons-548360"
	I0213 21:57:57.189512   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.189525   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189533   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.191249   16934 out.go:177] * Verifying ingress addon...
	I0213 21:57:57.189515   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189589   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189593   16934 main.go:141] libmachine: Making call to close driver server
	W0213 21:57:57.189597   16934 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 21:57:57.189601   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189606   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.189859   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.189940   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.190407   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0213 21:57:57.192727   16934 retry.go:31] will retry after 163.08714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 21:57:57.192755   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192764   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192769   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192773   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192793   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192782   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.192747   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192849   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192859   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.193059   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.193070   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.193086   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.193094   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.193706   16934 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0213 21:57:57.193966   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.193986   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194007   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194016   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194021   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194025   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194034   16934 addons.go:470] Verifying addon metrics-server=true in "addons-548360"
	I0213 21:57:57.194055   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194070   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194095   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194105   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194113   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194135   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194162   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194172   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194184   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.194191   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.194073   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194970   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.195006   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.195018   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.196901   16934 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-548360 service yakd-dashboard -n yakd-dashboard
	
	I0213 21:57:57.247881   16934 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0213 21:57:57.247906   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:57.247924   16934 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 21:57:57.247942   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:57.253214   16934 node_ready.go:49] node "addons-548360" has status "Ready":"True"
	I0213 21:57:57.253238   16934 node_ready.go:38] duration metric: took 68.050213ms waiting for node "addons-548360" to be "Ready" ...
	I0213 21:57:57.253247   16934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:57:57.272956   16934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace to be "Ready" ...
	I0213 21:57:57.273792   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.273818   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.274200   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.274221   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.283372   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.283399   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.283712   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.283729   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.283736   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.356821   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:57.751886   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:57.754071   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:57.828318   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.05054908s)
	I0213 21:57:57.828367   16934 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.394479094s)
	I0213 21:57:57.829964   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:57.828369   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.831524   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.833206   16934 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0213 21:57:57.831838   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.831883   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.834479   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.834508   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.834520   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.834517   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0213 21:57:57.834538   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0213 21:57:57.834802   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.834861   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.834875   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.834892   16934 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-548360"
	I0213 21:57:57.836472   16934 out.go:177] * Verifying csi-hostpath-driver addon...
	I0213 21:57:57.838449   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0213 21:57:57.942705   16934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 21:57:57.942730   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:58.202903   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:58.203280   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:58.350015   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0213 21:57:58.350038   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0213 21:57:58.389433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:58.465652   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:57:58.465673   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0213 21:57:58.513125   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:57:58.703609   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:58.708801   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:58.886416   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:59.229280   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:59.229388   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:59.477299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:59.523584   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:57:59.724203   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:59.725189   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:59.874334   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:00.215111   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:00.215864   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:00.361719   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:00.740344   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:00.740964   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:00.833715   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.476828323s)
	I0213 21:58:00.833785   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:00.833797   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:00.834211   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:00.834235   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:00.834246   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:00.834256   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:00.834211   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:00.834497   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:00.834559   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:00.834579   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:00.849517   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:01.228509   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:01.233861   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.321447   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.808278554s)
	I0213 21:58:01.321512   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.321526   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:01.321781   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.321806   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.321816   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.321824   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:01.322265   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.322280   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.322283   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:01.324584   16934 addons.go:470] Verifying addon gcp-auth=true in "addons-548360"
	I0213 21:58:01.326166   16934 out.go:177] * Verifying gcp-auth addon...
	I0213 21:58:01.328887   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0213 21:58:01.340329   16934 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0213 21:58:01.340354   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
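	(The repeated kapi.go waits in this log poll each addon's pods by label selector until they report the Ready condition. A minimal client-go sketch of that pattern, assuming a local kubeconfig and reusing the gcp-auth namespace and selector from the log — a sketch of the idea, not minikube's implementation:)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll until every pod matching the addon label is Ready (or we time out).
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("gcp-auth").List(context.TODO(), metav1.ListOptions{
				LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
			})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet, keep waiting
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil // still Pending / not Ready
				}
			}
			return true, nil
		})
		fmt.Println("wait finished, err =", err)
	}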
	I0213 21:58:01.353522   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:01.700118   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:01.700891   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.783604   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:01.836616   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:01.858743   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:02.212067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.218210   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:02.340125   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:02.358420   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:02.702335   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:02.702712   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.834849   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:02.852913   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:03.208098   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.208103   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.332856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:03.346363   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:03.700660   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.702315   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.834353   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:03.846072   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.201526   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.201986   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.304230   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:04.336752   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:04.360021   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.699665   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.704701   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.860057   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:04.860700   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:05.214363   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.216095   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.333009   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:05.347438   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:05.711108   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.711487   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.833800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:05.848191   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.198656   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.200643   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.335935   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.344046   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.703116   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.703234   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.785163   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:06.833994   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.847873   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.198764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.210929   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.333686   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.345188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.700061   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.700346   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.834681   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.870429   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.349075   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.350349   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.350919   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.351069   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.708917   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.709899   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.786493   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:08.838999   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.845127   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.200107   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.201246   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.335540   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.344400   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.699941   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.700420   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.833132   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.845549   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.219944   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.220183   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.332941   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.344534   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.698002   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.699732   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.835853   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.846454   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.201721   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.204181   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.279842   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:11.333832   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.345773   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.699760   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.701420   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.832699   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.857621   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.219290   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:12.219436   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.333315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:12.349937   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.702512   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.715014   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.102504   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.115059   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.200907   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.201243   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.281646   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:13.334486   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.356803   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.700144   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.700184   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.834035   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.845253   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.199485   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.199706   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.333694   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.344900   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.705214   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.707008   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.834291   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.845172   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.199603   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.202909   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.282604   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:15.334057   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:15.346190   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.698256   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.699481   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.883703   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.885156   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.198539   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.200784   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.333507   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.351847   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:16.732340   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.732670   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.836054   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.861095   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.488184   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.488330   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:17.488391   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.490375   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.498128   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:17.699304   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:17.701133   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.833645   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.849555   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:18.199689   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.201202   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.333382   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.347584   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:18.699820   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.701140   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.833511   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.845318   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.198924   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.199373   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.333948   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.345386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.699097   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.699654   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.779904   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:19.833625   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.845946   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.198712   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.198988   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.332882   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.344380   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.699319   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.699856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.833660   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.847341   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.199778   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.200378   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.333697   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.343901   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.835764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.836061   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.840469   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.840615   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:21.845826   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.198371   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.201260   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.335345   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.353231   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.698434   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.698787   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.833767   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.845032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.201216   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.201290   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.333017   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.345694   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.699102   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.700202   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.834595   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.864500   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.198306   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.199153   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.281399   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:24.333615   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.348722   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.699050   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.705649   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.833288   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.851799   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.198852   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.202650   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.334891   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.343993   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.698900   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.703040   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.833249   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.848363   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.199412   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.199549   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.333590   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.347914   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.698750   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.699295   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.783436   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:26.833424   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.845424   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.198326   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.203981   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.333460   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.350566   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.699128   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.705033   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.835532   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.848615   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:28.204226   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:28.204386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:28.334348   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:28.344312   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.084980   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.085519   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.085895   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.093996   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.111280   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:29.203234   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.204700   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.333500   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.351822   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.697795   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.700197   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.834643   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.844847   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.199292   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:30.200866   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.333359   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.344521   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.698722   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.698908   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:30.793111   16934 pod_ready.go:92] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.793135   16934 pod_ready.go:81] duration metric: took 33.520147005s waiting for pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.793143   16934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.799105   16934 pod_ready.go:92] pod "etcd-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.799138   16934 pod_ready.go:81] duration metric: took 5.988013ms waiting for pod "etcd-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.799147   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.808659   16934 pod_ready.go:92] pod "kube-apiserver-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.808678   16934 pod_ready.go:81] duration metric: took 9.525583ms waiting for pod "kube-apiserver-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.808687   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.817804   16934 pod_ready.go:92] pod "kube-controller-manager-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.817831   16934 pod_ready.go:81] duration metric: took 9.136825ms waiting for pod "kube-controller-manager-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.817848   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkr4l" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.823597   16934 pod_ready.go:92] pod "kube-proxy-gkr4l" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.823622   16934 pod_ready.go:81] duration metric: took 5.766025ms waiting for pod "kube-proxy-gkr4l" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.823633   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.832480   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.844535   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.178077   16934 pod_ready.go:92] pod "kube-scheduler-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:31.178098   16934 pod_ready.go:81] duration metric: took 354.457599ms waiting for pod "kube-scheduler-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.178108   16934 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.197489   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:31.199237   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.333145   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.344419   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.702538   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.705733   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:31.836603   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.873088   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:32.197701   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:32.198640   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:32.335929   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:32.343718   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:32.699856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:32.700189   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:32.833417   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:32.844800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.186031   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:33.198776   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:33.202827   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.333284   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.344120   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.698860   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.700345   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:33.836067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.849554   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.331262   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:34.334236   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.335964   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.343546   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.705550   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:34.708017   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.834815   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.845274   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.191157   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:35.205241   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.207918   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:35.333315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.348217   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.709680   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.715909   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:35.836245   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.872134   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.206766   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.213491   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:36.333647   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.378766   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.730508   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.739411   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:36.843193   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.859920   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.198337   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:37.201862   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.338604   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.385197   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.690740   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:37.700030   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:37.711768   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.833487   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.847399   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.198290   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:38.200246   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.351665   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.352330   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.730522   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:38.734536   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.824769   16934 pod_ready.go:92] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:38.824798   16934 pod_ready.go:81] duration metric: took 7.646684168s waiting for pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.824809   16934 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.839634   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.841561   16934 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:38.841583   16934 pod_ready.go:81] duration metric: took 16.766832ms waiting for pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.841605   16934 pod_ready.go:38] duration metric: took 41.5883375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:58:38.841624   16934 api_server.go:52] waiting for apiserver process to appear ...
	I0213 21:58:38.841682   16934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 21:58:38.852182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.932262   16934 api_server.go:72] duration metric: took 51.109583568s to wait for apiserver process to appear ...
	I0213 21:58:38.932292   16934 api_server.go:88] waiting for apiserver healthz status ...
	I0213 21:58:38.932319   16934 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0213 21:58:38.937752   16934 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0213 21:58:38.939365   16934 api_server.go:141] control plane version: v1.28.4
	I0213 21:58:38.939388   16934 api_server.go:131] duration metric: took 7.089518ms to wait for apiserver health ...
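	(The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint. A short sketch under the assumption that anonymous access to /healthz is permitted; TLS verification is skipped only to keep the sketch brief — minikube itself authenticates with the cluster's client certificates:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		// Endpoint taken from the log line above.
		resp, err := client.Get("https://192.168.39.217:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // healthy apiserver answers: 200 ok
	}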
	I0213 21:58:38.939396   16934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 21:58:38.961653   16934 system_pods.go:59] 18 kube-system pods found
	I0213 21:58:38.961702   16934 system_pods.go:61] "coredns-5dd5756b68-hlmz9" [8da21de0-1ed2-4221-8e70-36bbe7832fe0] Running
	I0213 21:58:38.961712   16934 system_pods.go:61] "csi-hostpath-attacher-0" [f3d05280-dffc-4b3e-87af-241451cc1cdc] Running
	I0213 21:58:38.961719   16934 system_pods.go:61] "csi-hostpath-resizer-0" [6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088] Running
	I0213 21:58:38.961731   16934 system_pods.go:61] "csi-hostpathplugin-f89wf" [4a792c70-a32f-4608-98ec-26b9c817b4f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:38.961743   16934 system_pods.go:61] "etcd-addons-548360" [bb102c25-39b6-4c8f-89ed-2325429ec12c] Running
	I0213 21:58:38.961755   16934 system_pods.go:61] "kube-apiserver-addons-548360" [e8714aaf-bf32-4429-be1c-67c8f3156cc9] Running
	I0213 21:58:38.961765   16934 system_pods.go:61] "kube-controller-manager-addons-548360" [eee31965-d4a3-4c21-ad11-48490702b453] Running
	I0213 21:58:38.961773   16934 system_pods.go:61] "kube-ingress-dns-minikube" [f1e93909-d75e-4377-be18-60377f7ce06d] Running
	I0213 21:58:38.961782   16934 system_pods.go:61] "kube-proxy-gkr4l" [2ea7ce55-faee-4a44-a16d-98788c2932b6] Running
	I0213 21:58:38.961792   16934 system_pods.go:61] "kube-scheduler-addons-548360" [48e6baab-2960-4701-88b0-43e9c88c673c] Running
	I0213 21:58:38.961804   16934 system_pods.go:61] "metrics-server-69cf46c98-ghxhg" [723e578e-19de-4bcf-86ed-9de4ffbe5650] Running
	I0213 21:58:38.961814   16934 system_pods.go:61] "nvidia-device-plugin-daemonset-mhcwx" [b9eec8df-b97e-4c67-9916-c51b3600b54b] Running
	I0213 21:58:38.961935   16934 system_pods.go:61] "registry-75mmv" [a146cfb0-9524-40f7-8bab-91a56de079a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 21:58:38.962096   16934 system_pods.go:61] "registry-proxy-mfshx" [dad71134-5cc3-4fa4-b391-4a08b89d5d04] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 21:58:38.962115   16934 system_pods.go:61] "snapshot-controller-58dbcc7b99-56xxb" [a8d47014-172e-4559-816c-97635f87860a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.962134   16934 system_pods.go:61] "snapshot-controller-58dbcc7b99-8pfd2" [6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.962142   16934 system_pods.go:61] "storage-provisioner" [71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f] Running
	I0213 21:58:38.962155   16934 system_pods.go:61] "tiller-deploy-7b677967b9-jn92b" [2a63d83e-5212-4e3e-9e40-0e87c7d8a741] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0213 21:58:38.962167   16934 system_pods.go:74] duration metric: took 22.764784ms to wait for pod list to return data ...
	I0213 21:58:38.962183   16934 default_sa.go:34] waiting for default service account to be created ...
	I0213 21:58:38.979920   16934 default_sa.go:45] found service account: "default"
	I0213 21:58:38.979949   16934 default_sa.go:55] duration metric: took 17.758442ms for default service account to be created ...
	I0213 21:58:38.979960   16934 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 21:58:38.997511   16934 system_pods.go:86] 18 kube-system pods found
	I0213 21:58:38.997552   16934 system_pods.go:89] "coredns-5dd5756b68-hlmz9" [8da21de0-1ed2-4221-8e70-36bbe7832fe0] Running
	I0213 21:58:38.997560   16934 system_pods.go:89] "csi-hostpath-attacher-0" [f3d05280-dffc-4b3e-87af-241451cc1cdc] Running
	I0213 21:58:38.997567   16934 system_pods.go:89] "csi-hostpath-resizer-0" [6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088] Running
	I0213 21:58:38.997578   16934 system_pods.go:89] "csi-hostpathplugin-f89wf" [4a792c70-a32f-4608-98ec-26b9c817b4f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:38.997585   16934 system_pods.go:89] "etcd-addons-548360" [bb102c25-39b6-4c8f-89ed-2325429ec12c] Running
	I0213 21:58:38.997594   16934 system_pods.go:89] "kube-apiserver-addons-548360" [e8714aaf-bf32-4429-be1c-67c8f3156cc9] Running
	I0213 21:58:38.997605   16934 system_pods.go:89] "kube-controller-manager-addons-548360" [eee31965-d4a3-4c21-ad11-48490702b453] Running
	I0213 21:58:38.997613   16934 system_pods.go:89] "kube-ingress-dns-minikube" [f1e93909-d75e-4377-be18-60377f7ce06d] Running
	I0213 21:58:38.997619   16934 system_pods.go:89] "kube-proxy-gkr4l" [2ea7ce55-faee-4a44-a16d-98788c2932b6] Running
	I0213 21:58:38.997625   16934 system_pods.go:89] "kube-scheduler-addons-548360" [48e6baab-2960-4701-88b0-43e9c88c673c] Running
	I0213 21:58:38.997631   16934 system_pods.go:89] "metrics-server-69cf46c98-ghxhg" [723e578e-19de-4bcf-86ed-9de4ffbe5650] Running
	I0213 21:58:38.997637   16934 system_pods.go:89] "nvidia-device-plugin-daemonset-mhcwx" [b9eec8df-b97e-4c67-9916-c51b3600b54b] Running
	I0213 21:58:38.997646   16934 system_pods.go:89] "registry-75mmv" [a146cfb0-9524-40f7-8bab-91a56de079a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 21:58:38.997654   16934 system_pods.go:89] "registry-proxy-mfshx" [dad71134-5cc3-4fa4-b391-4a08b89d5d04] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 21:58:38.997667   16934 system_pods.go:89] "snapshot-controller-58dbcc7b99-56xxb" [a8d47014-172e-4559-816c-97635f87860a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.997679   16934 system_pods.go:89] "snapshot-controller-58dbcc7b99-8pfd2" [6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.997685   16934 system_pods.go:89] "storage-provisioner" [71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f] Running
	I0213 21:58:38.997693   16934 system_pods.go:89] "tiller-deploy-7b677967b9-jn92b" [2a63d83e-5212-4e3e-9e40-0e87c7d8a741] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0213 21:58:38.997702   16934 system_pods.go:126] duration metric: took 17.736144ms to wait for k8s-apps to be running ...
	I0213 21:58:38.997712   16934 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 21:58:38.997766   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:58:39.047538   16934 system_svc.go:56] duration metric: took 49.816812ms WaitForService to wait for kubelet.
	I0213 21:58:39.047568   16934 kubeadm.go:581] duration metric: took 51.224893413s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 21:58:39.047591   16934 node_conditions.go:102] verifying NodePressure condition ...
	I0213 21:58:39.055663   16934 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 21:58:39.055696   16934 node_conditions.go:123] node cpu capacity is 2
	I0213 21:58:39.055715   16934 node_conditions.go:105] duration metric: took 8.118361ms to run NodePressure ...
	I0213 21:58:39.055728   16934 start.go:228] waiting for startup goroutines ...
	I0213 21:58:39.199611   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.199689   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:39.333368   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.345444   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:39.699315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:39.700897   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.835546   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.853838   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.198863   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.199081   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:40.333433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.344586   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.698485   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:40.699026   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.834135   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.848384   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.197926   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:41.200177   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.334284   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.346044   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.702323   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:41.703289   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.833824   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.853059   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.198225   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.199115   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:42.333906   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.344832   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.699072   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:42.699213   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.835628   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.844821   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.198627   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.199020   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:43.339064   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.344849   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.698908   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:43.708112   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.833752   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.855862   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.200853   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.201606   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:44.348136   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.351222   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.697778   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:44.698137   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.834300   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.845464   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.198532   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.204140   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:45.333668   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.345836   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.700935   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.700981   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:45.833266   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.847066   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.199424   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.200883   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:46.334182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.344445   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.698364   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:46.699772   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.840570   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.850299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.198870   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:47.200024   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.333045   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.344123   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.699984   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:47.705409   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.833476   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.844300   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.356674   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.356792   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:48.356818   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.361207   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.700433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:48.700575   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.833067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.850618   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.200197   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:49.201763   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.333435   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.344385   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.698596   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.700105   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:49.833557   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.849288   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.199594   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.200574   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:50.334598   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.347674   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.708469   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.714691   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:50.961622   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.996949   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.201124   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.202389   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:51.332901   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.348080   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.699801   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.700040   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:51.833784   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.847914   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.197764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:52.204194   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.334188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.344858   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.698707   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:52.699287   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.833001   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.844779   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.200520   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:53.200972   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.334275   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.351082   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.700761   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.709267   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:53.833278   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.844627   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.198580   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:54.198737   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.335931   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.349110   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.698536   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.699476   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:54.832467   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.845299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.198479   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.199313   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:55.344640   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.355609   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.701906   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:55.702319   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.833947   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.851574   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.198070   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.198738   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:56.333399   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.345188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.699685   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:56.700377   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.833967   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.852513   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.198865   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.199043   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:57.333952   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.344266   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.698833   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:57.699011   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.833302   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.850666   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.200920   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:58.201149   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.334009   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.344903   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.698320   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:58.698990   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.833673   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.848281   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:59.614032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.614077   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:59.614618   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:59.614837   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:59.698350   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:59.699053   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:59.834348   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.848613   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:00.198412   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:00.200244   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.338735   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.358645   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:00.701175   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.701206   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:00.834746   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.845406   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:01.197612   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:01.201690   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.334352   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.345496   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:01.701655   16934 kapi.go:107] duration metric: took 1m4.511243495s to wait for kubernetes.io/minikube-addons=registry ...
	I0213 21:59:01.701708   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.833423   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.866125   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:02.216647   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.338164   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.346326   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:02.704917   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.842130   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.872754   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:03.211524   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.335417   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.346682   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:03.699316   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.833698   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.844915   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:04.213994   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.337526   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.345708   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:04.702859   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.835596   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.856593   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:05.216678   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.334092   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.344989   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:05.698636   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.842528   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.846972   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:06.200187   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.334109   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.344209   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:06.700687   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.832938   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.844875   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:07.214051   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.332922   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.348376   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:07.827084   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.838386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.850091   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:08.198790   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.333622   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.350720   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:08.698235   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.833782   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.844121   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:09.202570   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.333385   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.349327   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:09.700015   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.833505   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.844393   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:10.198062   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.333466   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.346713   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:10.698313   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.833594   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.847484   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:11.198498   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.332564   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.348663   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:11.700929   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.833298   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.854885   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:12.199396   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:12.333616   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:12.356462   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:12.697902   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:12.833800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:12.843955   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:13.200299   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:13.333610   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.349199   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:13.700810   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:13.867862   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.869817   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:14.199427   16934 kapi.go:107] duration metric: took 1m17.005721851s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0213 21:59:14.333413   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.346680   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:14.847182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.871938   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:15.334025   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:15.344501   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:15.833591   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:15.846121   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.335503   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:16.346110   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.874581   16934 kapi.go:107] duration metric: took 1m15.545690469s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0213 21:59:16.876403   16934 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-548360 cluster.
	I0213 21:59:16.877721   16934 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0213 21:59:16.875935   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.879264   16934 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
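	As a minimal sketch of the opt-out mentioned in the gcp-auth messages above (the label key `gcp-auth-skip-secret` is taken from that message; the "true" value, pod name, and image are illustrative assumptions, not taken from this report), a pod that should not have the GCP credentials mounted could be declared like this:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # illustrative name, not from this report
	      labels:
	        gcp-auth-skip-secret: "true"  # key from the addon message; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx                  # placeholder image for the sketch

	Applying a manifest like this with kubectl would leave the pod without the credential mount while other pods in the cluster still receive it, per the addon behavior described above.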
	I0213 21:59:17.354841   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:17.954916   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:18.345854   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:18.847382   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:19.345364   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:19.883032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:20.344416   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:20.844672   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:21.345072   16934 kapi.go:107] duration metric: took 1m23.506622295s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0213 21:59:21.346924   16934 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, ingress-dns, nvidia-device-plugin, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0213 21:59:21.348333   16934 addons.go:505] enable addons completed in 1m34.317850901s: enabled=[cloud-spanner storage-provisioner helm-tiller inspektor-gadget metrics-server ingress-dns nvidia-device-plugin yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0213 21:59:21.348384   16934 start.go:233] waiting for cluster config update ...
	I0213 21:59:21.348406   16934 start.go:242] writing updated cluster config ...
	I0213 21:59:21.348659   16934 ssh_runner.go:195] Run: rm -f paused
	I0213 21:59:21.400966   16934 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 21:59:21.402762   16934 out.go:177] * Done! kubectl is now configured to use "addons-548360" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 21:56:58 UTC, ends at Tue 2024-02-13 22:02:09 UTC. --
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.688822067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b5ad657c-6549-4c43-9353-79fdce6d4030 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.690771991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4db8f40-4187-4ddd-95f5-c33d4e42cbff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.692068909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861729692050290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578414,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=d4db8f40-4187-4ddd-95f5-c33d4e42cbff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.692820968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=56c9a9a6-261e-464e-b77d-7d1b142843c3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.692927879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=56c9a9a6-261e-464e-b77d-7d1b142843c3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.693346668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9967506448caf81ef8e56c84e357722e9de25cd640188cfc2e4b5b3a9cbd2db8,PodSandboxId:9ca22f939a78b6b641f9ca47a60d824e8f8bdd812d842068ea264702d063619f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707861721285377225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-z2rxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd4c47fa-724b-477f-b662-681b3f368c37,},Annotations:map[string]string{io.kubernetes.container.hash: 15aad978,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b918f574fb420c5aa715240a385544c9806202920edb51e161ae256e81fb3d86,PodSandboxId:5fd4cebc7553c106ea7fbf9dd226747a0f15592aefd92204dd738398d1e16637,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1707861591976491404,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-q5pn7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: bd852b29-93e3-40cd-a95a-6cdad295e4e8,},An
notations:map[string]string{io.kubernetes.container.hash: dae60a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec046f87e275d929d1a4140d4f356c53b164a5e1dd04523d510ba8722096bf41,PodSandboxId:95389736c388f74d36863d6621537ea539660d8b15e94bf7a289091141c06cb9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707861583009206047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cdef0026-04e2-4f2d-a0be-076dce5a611b,},Annotations:map[string]string{io.kubernetes.container.hash: e16bf257,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df2548c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17078615
41736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d413dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha
256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759
338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2
c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c
7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middl
eware/chain.go:25" id=56c9a9a6-261e-464e-b77d-7d1b142843c3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.728077947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f99fe4be-0d6e-4665-89ae-b6725efafeaa name=/runtime.v1.RuntimeService/Version
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.728142147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f99fe4be-0d6e-4665-89ae-b6725efafeaa name=/runtime.v1.RuntimeService/Version
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.730372478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cc33db1e-1751-409d-8be0-cbe53224a5fc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.732065907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861729732047791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578414,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=cc33db1e-1751-409d-8be0-cbe53224a5fc name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.732819719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43107e7a-c1de-464f-b1d6-bf0fb52eae95 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.732923357Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43107e7a-c1de-464f-b1d6-bf0fb52eae95 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.733590882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9967506448caf81ef8e56c84e357722e9de25cd640188cfc2e4b5b3a9cbd2db8,PodSandboxId:9ca22f939a78b6b641f9ca47a60d824e8f8bdd812d842068ea264702d063619f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707861721285377225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-z2rxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd4c47fa-724b-477f-b662-681b3f368c37,},Annotations:map[string]string{io.kubernetes.container.hash: 15aad978,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b918f574fb420c5aa715240a385544c9806202920edb51e161ae256e81fb3d86,PodSandboxId:5fd4cebc7553c106ea7fbf9dd226747a0f15592aefd92204dd738398d1e16637,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1707861591976491404,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-q5pn7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: bd852b29-93e3-40cd-a95a-6cdad295e4e8,},An
notations:map[string]string{io.kubernetes.container.hash: dae60a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec046f87e275d929d1a4140d4f356c53b164a5e1dd04523d510ba8722096bf41,PodSandboxId:95389736c388f74d36863d6621537ea539660d8b15e94bf7a289091141c06cb9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707861583009206047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cdef0026-04e2-4f2d-a0be-076dce5a611b,},Annotations:map[string]string{io.kubernetes.container.hash: e16bf257,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df2548c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17078615
41736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d413dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha
256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759
338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2
c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c
7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middl
eware/chain.go:25" id=43107e7a-c1de-464f-b1d6-bf0fb52eae95 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.755664103Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=010b942e-9441-4341-a80c-a649b7886d2e name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.757090922Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9ca22f939a78b6b641f9ca47a60d824e8f8bdd812d842068ea264702d063619f,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-z2rxt,Uid:fd4c47fa-724b-477f-b662-681b3f368c37,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861718869901004,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-z2rxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd4c47fa-724b-477f-b662-681b3f368c37,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T22:01:58.530329731Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5fd4cebc7553c106ea7fbf9dd226747a0f15592aefd92204dd738398d1e16637,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-q5pn7,Uid:bd852b29-93e3-40cd-a95a-6cdad295e4e8,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861583564673347,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-q5pn7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: bd852b29-93e3-40cd-a95a-6cdad295e4e8,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:59:43.209638372Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95389736c388f74d36863d6621537ea539660d8b15e94bf7a289091141c06cb9,Metadata:&PodSandboxMetadata{Name:nginx,Uid:cdef0026-04e2-4f2d-a0be-076dce5a611b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861578611209598,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cdef0026-04e2-4f2d-a0be-076dce5a611b,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
02-13T21:59:38.275120919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&PodSandboxMetadata{Name:gcp-auth-d4c87556c-j7fcp,Uid:8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861546694553591,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:58:01.257102377Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1857afa97fb8b12b88f072a9b377c56c3c343cb88f1a5bf9146147b293d116af,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-69cff4fd79-9cp25,Uid:b35307cf-04bb-45d3-9312-e76f538fda2f,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1707861541354587801,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-9cp25,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b35307cf-04bb-45d3-9312-e76f538fda2f,pod-template-hash: 69cff4fd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:56.902127203Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6df2548c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-cjclt,Uid:88643b72-5b51-4217-942a-f286ddf52cd0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1707861478164139169,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kube
rnetes.io/controller-uid: a6f870a9-e37d-47ee-844d-d8d52e3bdff6,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: a6f870a9-e37d-47ee-844d-d8d52e3bdff6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:56.975935925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-xfjh9,Uid:f49049d6-6ca3-4b61-be2a-867c087fa990,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1707861477836604429,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid:
177f7ead-95ea-4a47-a24f-9a7d39b6c4ba,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 177f7ead-95ea-4a47-a24f-9a7d39b6c4ba,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:56.973237488Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-gmcgl,Uid:a1dd1624-7e81-4306-8d34-c020ef448cac,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861477222221809,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
gmcgl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:55.502713291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861475583231870,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mo
de\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T21:57:55.239880387Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4edb48eb984f3d17c86e036476c120292a3c8d4c0dccd140de26d9b8176708d4,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:f1e93909-d75e-4377-be18-60377f7ce06d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1707861474636122605,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e93909-d75e-4377-be18-60377f7ce06d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-02
-13T21:57:54.000686847Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&PodSandboxMetadata{Name:cloud-spanner-emulator-64c8c85f65-bwgbm,Uid:92006ffc-89c1-4ab2-9676-94b45895f5f9,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861473381262659,Labels:map[string]string{app: cloud-spanner-emulator,io.kubernetes.container.name: POD,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,pod-template-hash: 64c8c85f65,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:52.743534635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-hlmz9,Uid:8da21de0-1ed2-4221-8e70-36bbe7832fe0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861
470878049659,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T21:57:48.897836647Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a8b08e56b70486051d8992b8cdb759338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&PodSandboxMetadata{Name:kube-proxy-gkr4l,Uid:2ea7ce55-faee-4a44-a16d-98788c2932b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861469381644365,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T
21:57:47.551145668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-548360,Uid:976c886cb6512aaac367cb4d1401aa5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861445256136932,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: 976c886cb6512aaac367cb4d1401aa5e,kubernetes.io/config.seen: 2024-02-13T21:57:24.698732934Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-548360,
Uid:753c82c7870ea31d4181fa744c6910e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861445205323701,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 753c82c7870ea31d4181fa744c6910e0,kubernetes.io/config.seen: 2024-02-13T21:57:24.698735219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&PodSandboxMetadata{Name:etcd-addons-548360,Uid:b08a2489c6cce9f7b96056c2d8c264f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861445193470701,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489
c6cce9f7b96056c2d8c264f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: b08a2489c6cce9f7b96056c2d8c264f4,kubernetes.io/config.seen: 2024-02-13T21:57:24.698727903Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-548360,Uid:7b0b9fef824614c7e96285e9f2336030,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707861445189586048,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7b0b9fef824614c7e96285e9f2336030,kubernetes.io/config.seen: 2024-02-13T21:57:24.698734111Z,k
ubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=010b942e-9441-4341-a80c-a649b7886d2e name=/runtime.v1.RuntimeService/ListPodSandbox
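
	[Editorial note, not part of the captured journal: the crio[716] entries above are the CRI gRPC traffic between the kubelet and CRI-O — /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers with an empty filter ("No filters were applied, returning full container list"), and /runtime.v1.RuntimeService/ListPodSandbox. As a minimal sketch, assuming CRI-O's conventional socket path /var/run/crio/crio.sock on the minikube VM and the k8s.io/cri-api Go bindings, the same four calls can be issued directly; this is an illustration of the protocol in the log, not part of the test harness.]

	// sketch.go: reproduce the CRI calls seen in the crio debug log above.
	// Assumption: CRI-O listening on its default unix socket.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// Version: returns RuntimeName "cri-o" and RuntimeVersion, as in the VersionResponse above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// ImageFsInfo: image filesystem mountpoint, used bytes and inodes (ImageFsInfoResponse above).
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.Mountpoint, f.UsedBytes.Value, "bytes")
		}

		// ListContainers with an empty filter: the full container list dumped repeatedly above.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range cs.Containers {
			fmt.Println("container:", c.Id, c.Metadata.Name, c.State)
		}

		// ListPodSandbox with a nil filter: the sandbox list in the ListPodSandboxResponse above.
		ps, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
		if err != nil {
			panic(err)
		}
		for _, p := range ps.Items {
			fmt.Println("sandbox:", p.Id, p.Metadata.Name, p.State)
		}
	}

	[The same information can be read off the response dumps themselves; the sketch only shows where the repeated ListContainers/ListPodSandbox payloads in this log originate. End of editorial note; captured journal continues below.]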
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.764967357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dc0af5be-c243-4e16-8fc6-f7022d6f4630 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.765318980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dc0af5be-c243-4e16-8fc6-f7022d6f4630 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.767266754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9967506448caf81ef8e56c84e357722e9de25cd640188cfc2e4b5b3a9cbd2db8,PodSandboxId:9ca22f939a78b6b641f9ca47a60d824e8f8bdd812d842068ea264702d063619f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707861721285377225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-z2rxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd4c47fa-724b-477f-b662-681b3f368c37,},Annotations:map[string]string{io.kubernetes.container.hash: 15aad978,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b918f574fb420c5aa715240a385544c9806202920edb51e161ae256e81fb3d86,PodSandboxId:5fd4cebc7553c106ea7fbf9dd226747a0f15592aefd92204dd738398d1e16637,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1707861591976491404,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-q5pn7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: bd852b29-93e3-40cd-a95a-6cdad295e4e8,},An
notations:map[string]string{io.kubernetes.container.hash: dae60a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec046f87e275d929d1a4140d4f356c53b164a5e1dd04523d510ba8722096bf41,PodSandboxId:95389736c388f74d36863d6621537ea539660d8b15e94bf7a289091141c06cb9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707861583009206047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cdef0026-04e2-4f2d-a0be-076dce5a611b,},Annotations:map[string]string{io.kubernetes.container.hash: e16bf257,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df2548c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17078615
41736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d413dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha
256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759
338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2
c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c
7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middl
eware/chain.go:25" id=dc0af5be-c243-4e16-8fc6-f7022d6f4630 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.785257352Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=fb1535d6-9d9d-4170-b5d7-b9b532af6e27 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.785354130Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=fb1535d6-9d9d-4170-b5d7-b9b532af6e27 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.788024729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3de4bd27-8739-407d-b347-ab4d96b8311c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.792984455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861729792778870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:578414,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=3de4bd27-8739-407d-b347-ab4d96b8311c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.794191763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=52701378-00a6-4884-a407-02b78859103f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.794252550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=52701378-00a6-4884-a407-02b78859103f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:02:09 addons-548360 crio[716]: time="2024-02-13 22:02:09.794725475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9967506448caf81ef8e56c84e357722e9de25cd640188cfc2e4b5b3a9cbd2db8,PodSandboxId:9ca22f939a78b6b641f9ca47a60d824e8f8bdd812d842068ea264702d063619f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707861721285377225,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-z2rxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fd4c47fa-724b-477f-b662-681b3f368c37,},Annotations:map[string]string{io.kubernetes.container.hash: 15aad978,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b918f574fb420c5aa715240a385544c9806202920edb51e161ae256e81fb3d86,PodSandboxId:5fd4cebc7553c106ea7fbf9dd226747a0f15592aefd92204dd738398d1e16637,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1707861591976491404,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-q5pn7,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: bd852b29-93e3-40cd-a95a-6cdad295e4e8,},An
notations:map[string]string{io.kubernetes.container.hash: dae60a71,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec046f87e275d929d1a4140d4f356c53b164a5e1dd04523d510ba8722096bf41,PodSandboxId:95389736c388f74d36863d6621537ea539660d8b15e94bf7a289091141c06cb9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707861583009206047,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: cdef0026-04e2-4f2d-a0be-076dce5a611b,},Annotations:map[string]string{io.kubernetes.container.hash: e16bf257,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:map[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df2548c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17078615
41736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d413dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha
256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt
:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759
338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,M
etadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2
c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c
7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middl
eware/chain.go:25" id=52701378-00a6-4884-a407-02b78859103f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9967506448caf       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   9ca22f939a78b       hello-world-app-5d77478584-z2rxt
	b918f574fb420       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   5fd4cebc7553c       headlamp-7ddfbb94ff-q5pn7
	ec046f87e275d       docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027                              2 minutes ago       Running             nginx                     0                   95389736c388f       nginx
	a5d5509c8a837       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   950e8a5fbeda8       gcp-auth-d4c87556c-j7fcp
	fa131601c5a65       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     2                   6df2548c025b0       ingress-nginx-admission-patch-cjclt
	d413dfe614824       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   038f551356f39       ingress-nginx-admission-create-xfjh9
	4816e9514181c       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49               3 minutes ago       Running             cloud-spanner-emulator    0                   7e00a6ef70716       cloud-spanner-emulator-64c8c85f65-bwgbm
	7ee2992936784       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   c3d217f26dc99       storage-provisioner
	0fc3d94fc7df6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   0917fb0b972a3       yakd-dashboard-9947fc6bf-gmcgl
	15b39f73e0d38       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   771a218b8b1a7       coredns-5dd5756b68-hlmz9
	0192c0afa2f2c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   9a8b08e56b704       kube-proxy-gkr4l
	40244ef5b414f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   a0279864e9eb6       kube-scheduler-addons-548360
	8931f587c17a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   c57f992e5723d       etcd-addons-548360
	2482964b1a599       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   50ba5dbb8ab18       kube-apiserver-addons-548360
	d2b3356ee37bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   31e157eb2fd3e       kube-controller-manager-addons-548360
	
	
	==> coredns [15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f] <==
	[INFO] 10.244.0.8:48378 - 8943 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174407s
	[INFO] 10.244.0.8:50904 - 9396 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126394s
	[INFO] 10.244.0.8:50904 - 62902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082608s
	[INFO] 10.244.0.8:38606 - 12576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101778s
	[INFO] 10.244.0.8:38606 - 31522 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000512816s
	[INFO] 10.244.0.8:45046 - 2380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000148576s
	[INFO] 10.244.0.8:45046 - 35394 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000065633s
	[INFO] 10.244.0.8:59262 - 29699 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089241s
	[INFO] 10.244.0.8:59262 - 31806 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042554s
	[INFO] 10.244.0.8:42648 - 23391 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034886s
	[INFO] 10.244.0.8:42648 - 7265 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032557s
	[INFO] 10.244.0.8:41361 - 17775 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046609s
	[INFO] 10.244.0.8:41361 - 36193 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043624s
	[INFO] 10.244.0.8:56881 - 64064 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037926s
	[INFO] 10.244.0.8:56881 - 32577 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049081s
	[INFO] 10.244.0.21:37152 - 18992 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276141s
	[INFO] 10.244.0.21:39878 - 2314 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000352264s
	[INFO] 10.244.0.21:54655 - 40748 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000240623s
	[INFO] 10.244.0.21:37954 - 36720 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077009s
	[INFO] 10.244.0.21:56454 - 34022 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066434s
	[INFO] 10.244.0.21:40796 - 51297 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000053638s
	[INFO] 10.244.0.21:39708 - 12368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000765668s
	[INFO] 10.244.0.21:46744 - 14520 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.001243228s
	[INFO] 10.244.0.22:59197 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000461553s
	[INFO] 10.244.0.22:43708 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185532s
	
	
	==> describe nodes <==
	Name:               addons-548360
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-548360
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=addons-548360
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T21_57_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-548360
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 21:57:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-548360
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:01:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:00:39 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:00:39 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:00:39 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:00:39 +0000   Tue, 13 Feb 2024 21:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-548360
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ddcf3dbe0b24be5a4bc22610392b9da
	  System UUID:                3ddcf3db-e0b2-4be5-a4bc-22610392b9da
	  Boot ID:                    62459984-65af-4c5d-860c-ddc2dcffdbef
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-bwgbm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  default                     hello-world-app-5d77478584-z2rxt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-d4c87556c-j7fcp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  headlamp                    headlamp-7ddfbb94ff-q5pn7                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 coredns-5dd5756b68-hlmz9                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-548360                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m38s
	  kube-system                 kube-apiserver-addons-548360               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-548360      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-gkr4l                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-548360               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gmcgl             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node addons-548360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node addons-548360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node addons-548360 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-548360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-548360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-548360 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m36s                  kubelet          Node addons-548360 status is now: NodeReady
	  Normal  RegisteredNode           4m24s                  node-controller  Node addons-548360 event: Registered Node addons-548360 in Controller
	
	
	==> dmesg <==
	[  +0.150386] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Feb13 21:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.917003] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.112763] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.151138] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.110894] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.218069] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.823024] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +10.251975] systemd-fstab-generator[1243]: Ignoring "noauto" for root device
	[ +21.897547] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 21:58] kauditd_printk_skb: 35 callbacks suppressed
	[ +24.955625] kauditd_printk_skb: 16 callbacks suppressed
	[ +16.954485] kauditd_printk_skb: 16 callbacks suppressed
	[Feb13 21:59] kauditd_printk_skb: 34 callbacks suppressed
	[ +21.540681] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.207117] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.200495] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.609455] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.569524] kauditd_printk_skb: 6 callbacks suppressed
	[Feb13 22:00] kauditd_printk_skb: 11 callbacks suppressed
	[Feb13 22:01] kauditd_printk_skb: 12 callbacks suppressed
	[Feb13 22:02] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7] <==
	{"level":"warn","ts":"2024-02-13T21:58:59.602356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:58:59.1923Z","time spent":"410.049997ms","remote":"127.0.0.1:56774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14054,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-02-13T21:58:59.602815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.476495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82283"}
	{"level":"info","ts":"2024-02-13T21:58:59.602874Z","caller":"traceutil/trace.go:171","msg":"trace[1467028126] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1096; }","duration":"265.541409ms","start":"2024-02-13T21:58:59.337325Z","end":"2024-02-13T21:58:59.602866Z","steps":["trace[1467028126] 'agreement among raft nodes before linearized reading'  (duration: 265.178369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:59.603033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.746919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11250"}
	{"level":"info","ts":"2024-02-13T21:58:59.603056Z","caller":"traceutil/trace.go:171","msg":"trace[1154841573] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1096; }","duration":"274.772585ms","start":"2024-02-13T21:58:59.328276Z","end":"2024-02-13T21:58:59.603049Z","steps":["trace[1154841573] 'agreement among raft nodes before linearized reading'  (duration: 274.711098ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.812476Z","caller":"traceutil/trace.go:171","msg":"trace[2024106671] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"190.697402ms","start":"2024-02-13T21:59:07.621763Z","end":"2024-02-13T21:59:07.812461Z","steps":["trace[2024106671] 'process raft request'  (duration: 190.555882ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.820547Z","caller":"traceutil/trace.go:171","msg":"trace[646463291] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"181.380432ms","start":"2024-02-13T21:59:07.639153Z","end":"2024-02-13T21:59:07.820533Z","steps":["trace[646463291] 'process raft request'  (duration: 180.910336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:07.821626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.263213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-02-13T21:59:07.821662Z","caller":"traceutil/trace.go:171","msg":"trace[446632817] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1163; }","duration":"128.340898ms","start":"2024-02-13T21:59:07.693313Z","end":"2024-02-13T21:59:07.821654Z","steps":["trace[446632817] 'agreement among raft nodes before linearized reading'  (duration: 128.207149ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.830637Z","caller":"traceutil/trace.go:171","msg":"trace[514868235] linearizableReadLoop","detail":"{readStateIndex:1195; appliedIndex:1193; }","duration":"126.871603ms","start":"2024-02-13T21:59:07.693336Z","end":"2024-02-13T21:59:07.820208Z","steps":["trace[514868235] 'read index received'  (duration: 118.892765ms)","trace[514868235] 'applied index is now lower than readState.Index'  (duration: 7.977994ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T21:59:12.072317Z","caller":"traceutil/trace.go:171","msg":"trace[736329166] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"180.54658ms","start":"2024-02-13T21:59:11.891749Z","end":"2024-02-13T21:59:12.072296Z","steps":["trace[736329166] 'process raft request'  (duration: 179.479713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:17.947817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.888294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82374"}
	{"level":"info","ts":"2024-02-13T21:59:17.947897Z","caller":"traceutil/trace.go:171","msg":"trace[659305666] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1212; }","duration":"109.986646ms","start":"2024-02-13T21:59:17.837899Z","end":"2024-02-13T21:59:17.947886Z","steps":["trace[659305666] 'range keys from in-memory index tree'  (duration: 109.698927ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:36.959714Z","caller":"traceutil/trace.go:171","msg":"trace[912198313] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1355; }","duration":"121.003672ms","start":"2024-02-13T21:59:36.838699Z","end":"2024-02-13T21:59:36.959703Z","steps":["trace[912198313] 'process raft request'  (duration: 120.529885ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:36.959612Z","caller":"traceutil/trace.go:171","msg":"trace[2020295862] linearizableReadLoop","detail":"{readStateIndex:1395; appliedIndex:1394; }","duration":"111.727126ms","start":"2024-02-13T21:59:36.847642Z","end":"2024-02-13T21:59:36.959369Z","steps":["trace[2020295862] 'read index received'  (duration: 111.538356ms)","trace[2020295862] 'applied index is now lower than readState.Index'  (duration: 187.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T21:59:36.960201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.522936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T21:59:36.960322Z","caller":"traceutil/trace.go:171","msg":"trace[896758103] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1355; }","duration":"112.692499ms","start":"2024-02-13T21:59:36.847616Z","end":"2024-02-13T21:59:36.960309Z","steps":["trace[896758103] 'agreement among raft nodes before linearized reading'  (duration: 112.137065ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:50.133282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.365899ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17437249069802489562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" mod_revision:1412 > success:<request_put:<key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" value_size:1161 >> failure:<request_range:<key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-13T21:59:50.133382Z","caller":"traceutil/trace.go:171","msg":"trace[1948142554] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"437.058735ms","start":"2024-02-13T21:59:49.696311Z","end":"2024-02-13T21:59:50.13337Z","steps":["trace[1948142554] 'process raft request'  (duration: 278.523693ms)","trace[1948142554] 'compare'  (duration: 158.179484ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T21:59:50.133495Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:49.696289Z","time spent":"437.176407ms","remote":"127.0.0.1:56766","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1237,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" mod_revision:1412 > success:<request_put:<key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" value_size:1161 >> failure:<request_range:<key:\"/registry/persistentvolumes/pvc-94c1659d-c197-459f-ae81-0c70edc6f082\" > >"}
	{"level":"info","ts":"2024-02-13T21:59:50.138333Z","caller":"traceutil/trace.go:171","msg":"trace[1039171345] transaction","detail":"{read_only:false; response_revision:1494; number_of_response:1; }","duration":"433.701462ms","start":"2024-02-13T21:59:49.704616Z","end":"2024-02-13T21:59:50.138317Z","steps":["trace[1039171345] 'process raft request'  (duration: 432.51715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:50.138844Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:59:49.702514Z","time spent":"436.202355ms","remote":"127.0.0.1:56774","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3926,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/tiller-deploy-7b677967b9-jn92b\" mod_revision:1488 > success:<request_put:<key:\"/registry/pods/kube-system/tiller-deploy-7b677967b9-jn92b\" value_size:3861 >> failure:<request_range:<key:\"/registry/pods/kube-system/tiller-deploy-7b677967b9-jn92b\" > >"}
	{"level":"info","ts":"2024-02-13T22:00:18.542789Z","caller":"traceutil/trace.go:171","msg":"trace[1819873189] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"302.550512ms","start":"2024-02-13T22:00:18.240211Z","end":"2024-02-13T22:00:18.542761Z","steps":["trace[1819873189] 'process raft request'  (duration: 302.414034ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T22:00:18.543092Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T22:00:18.240194Z","time spent":"302.711271ms","remote":"127.0.0.1:56792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-548360\" mod_revision:1599 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-548360\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-548360\" > >"}
	{"level":"info","ts":"2024-02-13T22:00:55.343316Z","caller":"traceutil/trace.go:171","msg":"trace[1003087181] transaction","detail":"{read_only:false; response_revision:1736; number_of_response:1; }","duration":"259.102531ms","start":"2024-02-13T22:00:55.084188Z","end":"2024-02-13T22:00:55.343291Z","steps":["trace[1003087181] 'process raft request'  (duration: 259.007165ms)"],"step_count":1}
	
	
	==> gcp-auth [a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b] <==
	2024/02/13 21:59:31 Ready to write response ...
	2024/02/13 21:59:32 Ready to marshal response ...
	2024/02/13 21:59:32 Ready to write response ...
	2024/02/13 21:59:33 Ready to marshal response ...
	2024/02/13 21:59:33 Ready to write response ...
	2024/02/13 21:59:34 Ready to marshal response ...
	2024/02/13 21:59:34 Ready to write response ...
	2024/02/13 21:59:38 Ready to marshal response ...
	2024/02/13 21:59:38 Ready to write response ...
	2024/02/13 21:59:42 Ready to marshal response ...
	2024/02/13 21:59:42 Ready to write response ...
	2024/02/13 21:59:42 Ready to marshal response ...
	2024/02/13 21:59:42 Ready to write response ...
	2024/02/13 21:59:43 Ready to marshal response ...
	2024/02/13 21:59:43 Ready to write response ...
	2024/02/13 21:59:43 Ready to marshal response ...
	2024/02/13 21:59:43 Ready to write response ...
	2024/02/13 21:59:50 Ready to marshal response ...
	2024/02/13 21:59:50 Ready to write response ...
	2024/02/13 22:00:11 Ready to marshal response ...
	2024/02/13 22:00:11 Ready to write response ...
	2024/02/13 22:00:50 Ready to marshal response ...
	2024/02/13 22:00:50 Ready to write response ...
	2024/02/13 22:01:58 Ready to marshal response ...
	2024/02/13 22:01:58 Ready to write response ...
	
	
	==> kernel <==
	 22:02:10 up 5 min,  0 users,  load average: 0.93, 1.90, 1.00
	Linux addons-548360 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647] <==
	I0213 21:59:30.734001       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 21:59:37.826714       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0213 21:59:38.417902       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.228.30"}
	I0213 21:59:42.870492       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.72.48"}
	E0213 22:00:06.328712       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0213 22:00:27.778371       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0213 22:00:39.817136       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0213 22:01:06.076480       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.079212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.089230       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.089356       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.099222       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.099544       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.122123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.122192       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.128542       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.128642       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.142347       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.143872       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0213 22:01:06.172448       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0213 22:01:06.172507       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0213 22:01:07.123024       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0213 22:01:07.172698       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0213 22:01:07.199924       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0213 22:01:58.774008       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.120.66"}
	
	
	==> kube-controller-manager [d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd] <==
	E0213 22:01:21.798241       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:21.873578       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:21.873680       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:24.490246       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:24.490302       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:37.790568       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:37.790621       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:39.289499       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:39.289551       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:40.999762       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:40.999828       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 22:01:42.203676       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:01:42.203833       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 22:01:58.467499       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0213 22:01:58.514749       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-z2rxt"
	I0213 22:01:58.523731       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.644591ms"
	I0213 22:01:58.572469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.59476ms"
	I0213 22:01:58.572607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.557µs"
	I0213 22:02:01.649757       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0213 22:02:01.655327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="11.106µs"
	I0213 22:02:01.660814       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0213 22:02:01.769782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.977605ms"
	I0213 22:02:01.770518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="121.931µs"
	W0213 22:02:09.783182       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 22:02:09.783348       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762] <==
	I0213 21:58:05.871165       1 server_others.go:69] "Using iptables proxy"
	I0213 21:58:06.166006       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0213 21:58:07.481140       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 21:58:07.481186       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 21:58:07.756118       1 server_others.go:152] "Using iptables Proxier"
	I0213 21:58:07.756219       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 21:58:07.756513       1 server.go:846] "Version info" version="v1.28.4"
	I0213 21:58:07.756712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 21:58:07.867543       1 config.go:188] "Starting service config controller"
	I0213 21:58:07.867616       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 21:58:07.867657       1 config.go:97] "Starting endpoint slice config controller"
	I0213 21:58:07.867664       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 21:58:07.890498       1 config.go:315] "Starting node config controller"
	I0213 21:58:07.890593       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 21:58:08.276619       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 21:58:08.371098       1 shared_informer.go:318] Caches are synced for service config
	I0213 21:58:08.391154       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4] <==
	W0213 21:57:31.811368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:31.811567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:31.843618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 21:57:31.843674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 21:57:31.970694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 21:57:31.970845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 21:57:32.113556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 21:57:32.113688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 21:57:32.129734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 21:57:32.129934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 21:57:32.203635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:32.203702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:32.220353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 21:57:32.220512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 21:57:32.254286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 21:57:32.254380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 21:57:32.295786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 21:57:32.295849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 21:57:32.297714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:32.297856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:32.305867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 21:57:32.306217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 21:57:32.327036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 21:57:32.327125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0213 21:57:33.936970       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 21:56:58 UTC, ends at Tue 2024-02-13 22:02:10 UTC. --
	Feb 13 22:01:58 addons-548360 kubelet[1250]: I0213 22:01:58.531066    1250 memory_manager.go:346] "RemoveStaleState removing state" podUID="a8d47014-172e-4559-816c-97635f87860a" containerName="volume-snapshot-controller"
	Feb 13 22:01:58 addons-548360 kubelet[1250]: I0213 22:01:58.576740    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8c9c\" (UniqueName: \"kubernetes.io/projected/fd4c47fa-724b-477f-b662-681b3f368c37-kube-api-access-r8c9c\") pod \"hello-world-app-5d77478584-z2rxt\" (UID: \"fd4c47fa-724b-477f-b662-681b3f368c37\") " pod="default/hello-world-app-5d77478584-z2rxt"
	Feb 13 22:01:58 addons-548360 kubelet[1250]: I0213 22:01:58.576786    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fd4c47fa-724b-477f-b662-681b3f368c37-gcp-creds\") pod \"hello-world-app-5d77478584-z2rxt\" (UID: \"fd4c47fa-724b-477f-b662-681b3f368c37\") " pod="default/hello-world-app-5d77478584-z2rxt"
	Feb 13 22:01:59 addons-548360 kubelet[1250]: I0213 22:01:59.987674    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t26wv\" (UniqueName: \"kubernetes.io/projected/f1e93909-d75e-4377-be18-60377f7ce06d-kube-api-access-t26wv\") pod \"f1e93909-d75e-4377-be18-60377f7ce06d\" (UID: \"f1e93909-d75e-4377-be18-60377f7ce06d\") "
	Feb 13 22:01:59 addons-548360 kubelet[1250]: I0213 22:01:59.992652    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e93909-d75e-4377-be18-60377f7ce06d-kube-api-access-t26wv" (OuterVolumeSpecName: "kube-api-access-t26wv") pod "f1e93909-d75e-4377-be18-60377f7ce06d" (UID: "f1e93909-d75e-4377-be18-60377f7ce06d"). InnerVolumeSpecName "kube-api-access-t26wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:02:00 addons-548360 kubelet[1250]: I0213 22:02:00.089023    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-t26wv\" (UniqueName: \"kubernetes.io/projected/f1e93909-d75e-4377-be18-60377f7ce06d-kube-api-access-t26wv\") on node \"addons-548360\" DevicePath \"\""
	Feb 13 22:02:00 addons-548360 kubelet[1250]: I0213 22:02:00.705131    1250 scope.go:117] "RemoveContainer" containerID="02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35"
	Feb 13 22:02:00 addons-548360 kubelet[1250]: I0213 22:02:00.860202    1250 scope.go:117] "RemoveContainer" containerID="02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35"
	Feb 13 22:02:00 addons-548360 kubelet[1250]: E0213 22:02:00.865452    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35\": container with ID starting with 02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35 not found: ID does not exist" containerID="02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35"
	Feb 13 22:02:00 addons-548360 kubelet[1250]: I0213 22:02:00.865541    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35"} err="failed to get container status \"02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35\": rpc error: code = NotFound desc = could not find container \"02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35\": container with ID starting with 02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35 not found: ID does not exist"
	Feb 13 22:02:01 addons-548360 kubelet[1250]: I0213 22:02:01.747782    1250 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-z2rxt" podStartSLOduration=2.312246646 podCreationTimestamp="2024-02-13 22:01:58 +0000 UTC" firstStartedPulling="2024-02-13 22:01:59.827305928 +0000 UTC m=+265.457071159" lastFinishedPulling="2024-02-13 22:02:01.262759275 +0000 UTC m=+266.892524507" observedRunningTime="2024-02-13 22:02:01.747172335 +0000 UTC m=+267.376937585" watchObservedRunningTime="2024-02-13 22:02:01.747699994 +0000 UTC m=+267.377465225"
	Feb 13 22:02:02 addons-548360 kubelet[1250]: I0213 22:02:02.563085    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="88643b72-5b51-4217-942a-f286ddf52cd0" path="/var/lib/kubelet/pods/88643b72-5b51-4217-942a-f286ddf52cd0/volumes"
	Feb 13 22:02:02 addons-548360 kubelet[1250]: I0213 22:02:02.563665    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f1e93909-d75e-4377-be18-60377f7ce06d" path="/var/lib/kubelet/pods/f1e93909-d75e-4377-be18-60377f7ce06d/volumes"
	Feb 13 22:02:02 addons-548360 kubelet[1250]: I0213 22:02:02.564061    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f49049d6-6ca3-4b61-be2a-867c087fa990" path="/var/lib/kubelet/pods/f49049d6-6ca3-4b61-be2a-867c087fa990/volumes"
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.029669    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b35307cf-04bb-45d3-9312-e76f538fda2f-webhook-cert\") pod \"b35307cf-04bb-45d3-9312-e76f538fda2f\" (UID: \"b35307cf-04bb-45d3-9312-e76f538fda2f\") "
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.030799    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsgrn\" (UniqueName: \"kubernetes.io/projected/b35307cf-04bb-45d3-9312-e76f538fda2f-kube-api-access-tsgrn\") pod \"b35307cf-04bb-45d3-9312-e76f538fda2f\" (UID: \"b35307cf-04bb-45d3-9312-e76f538fda2f\") "
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.035028    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35307cf-04bb-45d3-9312-e76f538fda2f-kube-api-access-tsgrn" (OuterVolumeSpecName: "kube-api-access-tsgrn") pod "b35307cf-04bb-45d3-9312-e76f538fda2f" (UID: "b35307cf-04bb-45d3-9312-e76f538fda2f"). InnerVolumeSpecName "kube-api-access-tsgrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.035141    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35307cf-04bb-45d3-9312-e76f538fda2f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b35307cf-04bb-45d3-9312-e76f538fda2f" (UID: "b35307cf-04bb-45d3-9312-e76f538fda2f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.131732    1250 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b35307cf-04bb-45d3-9312-e76f538fda2f-webhook-cert\") on node \"addons-548360\" DevicePath \"\""
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.131800    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tsgrn\" (UniqueName: \"kubernetes.io/projected/b35307cf-04bb-45d3-9312-e76f538fda2f-kube-api-access-tsgrn\") on node \"addons-548360\" DevicePath \"\""
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.737718    1250 scope.go:117] "RemoveContainer" containerID="4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72"
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.775324    1250 scope.go:117] "RemoveContainer" containerID="4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72"
	Feb 13 22:02:05 addons-548360 kubelet[1250]: E0213 22:02:05.776083    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72\": container with ID starting with 4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72 not found: ID does not exist" containerID="4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72"
	Feb 13 22:02:05 addons-548360 kubelet[1250]: I0213 22:02:05.776152    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72"} err="failed to get container status \"4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72\": rpc error: code = NotFound desc = could not find container \"4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72\": container with ID starting with 4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72 not found: ID does not exist"
	Feb 13 22:02:06 addons-548360 kubelet[1250]: I0213 22:02:06.563529    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b35307cf-04bb-45d3-9312-e76f538fda2f" path="/var/lib/kubelet/pods/b35307cf-04bb-45d3-9312-e76f538fda2f/volumes"
	
	
	==> storage-provisioner [7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d] <==
	I0213 21:58:11.141648       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 21:58:11.193917       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 21:58:11.198643       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 21:58:11.253381       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 21:58:11.253836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7!
	I0213 21:58:11.284595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"776e0c08-5210-4cad-a814-b6a72b9380a1", APIVersion:"v1", ResourceVersion:"887", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7 became leader
	I0213 21:58:11.463268       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-548360 -n addons-548360
helpers_test.go:261: (dbg) Run:  kubectl --context addons-548360 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.68s)
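For reference, the failing step is the in-VM "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" shown above, which never returned a response within the ~2m8s window. A minimal Go sketch of an equivalent host-header probe (an illustration only, not part of the test suite; it assumes the node IP 192.168.39.217 seen in the logs is reachable from wherever this runs) would look like:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: the node IP from the logs is reachable from the caller;
	// the test itself curls 127.0.0.1 from inside the VM via minikube ssh.
	const nodeIP = "192.168.39.217"

	req, err := http.NewRequest(http.MethodGet, "http://"+nodeIP+"/", nil)
	if err != nil {
		panic(err)
	}
	// ingress-nginx routes on the Host header, so it must match the test Ingress.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		// The failing step sat in this state until the ~2m8s curl gave up.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}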

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (11.83s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-bwgbm" [92006ffc-89c1-4ab2-9676-94b45895f5f9] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.010102742s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-548360
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-548360: exit status 11 (550.458632ms)

-- stdout --


-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-02-13T21:59:34Z" level=error msg="stat /run/runc/b65fbcdd509ce935ca945b3ccdd82cc8c9da377fc7c9ee18747ebcb379cc8ece: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 addons disable cloud-spanner -p addons-548360" : exit status 11
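The MK_ADDON_DISABLE_PAUSED exit comes from the pre-disable pause check, which runs "sudo runc list -f json" inside the VM; it failed here because a container state file under /run/runc/ was already gone. A rough Go sketch of that kind of check, purely illustrative and not minikube's actual implementation (the JSON field names are assumptions about runc's list output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields this check cares about; the JSON keys
// are assumptions about the output of `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the branch the addon disable hit: runc exited non-zero because
		// a container state file under /run/runc/ had already been removed.
		fmt.Println("runc list failed:", err)
		return
	}

	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}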
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-548360 -n addons-548360
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 logs -n 25: (3.992159564s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-236740              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-236740              | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only              | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-142558              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-142558              | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only              | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-452583              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-452583              | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-236740              | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-142558              | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-452583              | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | --download-only -p                   | binary-mirror-720567 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | binary-mirror-720567                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46241               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-720567              | binary-mirror-720567 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| addons  | disable dashboard -p                 | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-548360                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | addons-548360                        |                      |         |         |                     |                     |
	| start   | -p addons-548360 --wait=true         | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:59 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         |  --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | -p addons-548360                     |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC | 13 Feb 24 21:59 UTC |
	|         | addons-548360                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-548360        | jenkins | v1.32.0 | 13 Feb 24 21:59 UTC |                     |
	|         | addons-548360                        |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:45
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:45.651714   16934 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:45.652000   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:45.652010   16934 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:45.652015   16934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:45.652194   16934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 21:56:45.652814   16934 out.go:298] Setting JSON to false
	I0213 21:56:45.653621   16934 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2357,"bootTime":1707859049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:45.653676   16934 start.go:138] virtualization: kvm guest
	I0213 21:56:45.655858   16934 out.go:177] * [addons-548360] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:45.657155   16934 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 21:56:45.657167   16934 notify.go:220] Checking for updates...
	I0213 21:56:45.658359   16934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:45.659536   16934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:56:45.660744   16934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:45.661864   16934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 21:56:45.662878   16934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 21:56:45.664140   16934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:45.695796   16934 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 21:56:45.697039   16934 start.go:298] selected driver: kvm2
	I0213 21:56:45.697051   16934 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:45.697063   16934 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 21:56:45.697768   16934 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:45.697852   16934 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:45.713374   16934 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:45.713430   16934 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:45.713684   16934 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 21:56:45.713770   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:56:45.713792   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:56:45.713807   16934 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:45.713818   16934 start_flags.go:321] config:
	{Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:45.714020   16934 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:45.716658   16934 out.go:177] * Starting control plane node addons-548360 in cluster addons-548360
	I0213 21:56:45.717848   16934 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 21:56:45.717907   16934 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 21:56:45.717921   16934 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:45.717992   16934 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 21:56:45.718002   16934 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 21:56:45.718318   16934 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json ...
	I0213 21:56:45.718339   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json: {Name:mk96aacdba824faa4fb9e974154f4737e39c2ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:56:45.718467   16934 start.go:365] acquiring machines lock for addons-548360: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 21:56:45.718512   16934 start.go:369] acquired machines lock for "addons-548360" in 30.357µs
	I0213 21:56:45.718530   16934 start.go:93] Provisioning new machine with config: &{Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 21:56:45.718581   16934 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 21:56:45.720440   16934 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0213 21:56:45.720616   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:56:45.720681   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:56:45.734221   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0213 21:56:45.734646   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:56:45.735159   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:56:45.735184   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:56:45.735492   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:56:45.735643   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:56:45.735769   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:56:45.735889   16934 start.go:159] libmachine.API.Create for "addons-548360" (driver="kvm2")
	I0213 21:56:45.735924   16934 client.go:168] LocalClient.Create starting
	I0213 21:56:45.735962   16934 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem
	I0213 21:56:45.833929   16934 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem
	I0213 21:56:45.975275   16934 main.go:141] libmachine: Running pre-create checks...
	I0213 21:56:45.975297   16934 main.go:141] libmachine: (addons-548360) Calling .PreCreateCheck
	I0213 21:56:45.975781   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:56:45.976194   16934 main.go:141] libmachine: Creating machine...
	I0213 21:56:45.976209   16934 main.go:141] libmachine: (addons-548360) Calling .Create
	I0213 21:56:45.976386   16934 main.go:141] libmachine: (addons-548360) Creating KVM machine...
	I0213 21:56:45.977697   16934 main.go:141] libmachine: (addons-548360) DBG | found existing default KVM network
	I0213 21:56:45.978473   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:45.978320   16956 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f210}
	I0213 21:56:45.984965   16934 main.go:141] libmachine: (addons-548360) DBG | trying to create private KVM network mk-addons-548360 192.168.39.0/24...
	I0213 21:56:46.051525   16934 main.go:141] libmachine: (addons-548360) DBG | private KVM network mk-addons-548360 192.168.39.0/24 created
	I0213 21:56:46.051568   16934 main.go:141] libmachine: (addons-548360) Setting up store path in /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 ...
	I0213 21:56:46.051587   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.051505   16956 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:46.051615   16934 main.go:141] libmachine: (addons-548360) Building disk image from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 21:56:46.051635   16934 main.go:141] libmachine: (addons-548360) Downloading /home/jenkins/minikube-integration/18171-8990/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 21:56:46.265585   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.265445   16956 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa...
	I0213 21:56:46.408080   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.407963   16956 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/addons-548360.rawdisk...
	I0213 21:56:46.408106   16934 main.go:141] libmachine: (addons-548360) DBG | Writing magic tar header
	I0213 21:56:46.408116   16934 main.go:141] libmachine: (addons-548360) DBG | Writing SSH key tar header
	I0213 21:56:46.408127   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:46.408075   16956 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 ...
	I0213 21:56:46.408143   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360
	I0213 21:56:46.408201   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360 (perms=drwx------)
	I0213 21:56:46.408229   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines (perms=drwxr-xr-x)
	I0213 21:56:46.408239   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines
	I0213 21:56:46.408257   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:46.408273   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990
	I0213 21:56:46.408290   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 21:56:46.408300   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home/jenkins
	I0213 21:56:46.408317   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube (perms=drwxr-xr-x)
	I0213 21:56:46.408326   16934 main.go:141] libmachine: (addons-548360) DBG | Checking permissions on dir: /home
	I0213 21:56:46.408347   16934 main.go:141] libmachine: (addons-548360) DBG | Skipping /home - not owner
	I0213 21:56:46.408366   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990 (perms=drwxrwxr-x)
	I0213 21:56:46.408378   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 21:56:46.408394   16934 main.go:141] libmachine: (addons-548360) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 21:56:46.408408   16934 main.go:141] libmachine: (addons-548360) Creating domain...
	I0213 21:56:46.409852   16934 main.go:141] libmachine: (addons-548360) define libvirt domain using xml: 
	I0213 21:56:46.409895   16934 main.go:141] libmachine: (addons-548360) <domain type='kvm'>
	I0213 21:56:46.409908   16934 main.go:141] libmachine: (addons-548360)   <name>addons-548360</name>
	I0213 21:56:46.409917   16934 main.go:141] libmachine: (addons-548360)   <memory unit='MiB'>4000</memory>
	I0213 21:56:46.409927   16934 main.go:141] libmachine: (addons-548360)   <vcpu>2</vcpu>
	I0213 21:56:46.409936   16934 main.go:141] libmachine: (addons-548360)   <features>
	I0213 21:56:46.409944   16934 main.go:141] libmachine: (addons-548360)     <acpi/>
	I0213 21:56:46.409955   16934 main.go:141] libmachine: (addons-548360)     <apic/>
	I0213 21:56:46.409966   16934 main.go:141] libmachine: (addons-548360)     <pae/>
	I0213 21:56:46.409976   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410009   16934 main.go:141] libmachine: (addons-548360)   </features>
	I0213 21:56:46.410037   16934 main.go:141] libmachine: (addons-548360)   <cpu mode='host-passthrough'>
	I0213 21:56:46.410051   16934 main.go:141] libmachine: (addons-548360)   
	I0213 21:56:46.410063   16934 main.go:141] libmachine: (addons-548360)   </cpu>
	I0213 21:56:46.410077   16934 main.go:141] libmachine: (addons-548360)   <os>
	I0213 21:56:46.410090   16934 main.go:141] libmachine: (addons-548360)     <type>hvm</type>
	I0213 21:56:46.410105   16934 main.go:141] libmachine: (addons-548360)     <boot dev='cdrom'/>
	I0213 21:56:46.410114   16934 main.go:141] libmachine: (addons-548360)     <boot dev='hd'/>
	I0213 21:56:46.410129   16934 main.go:141] libmachine: (addons-548360)     <bootmenu enable='no'/>
	I0213 21:56:46.410158   16934 main.go:141] libmachine: (addons-548360)   </os>
	I0213 21:56:46.410172   16934 main.go:141] libmachine: (addons-548360)   <devices>
	I0213 21:56:46.410190   16934 main.go:141] libmachine: (addons-548360)     <disk type='file' device='cdrom'>
	I0213 21:56:46.410209   16934 main.go:141] libmachine: (addons-548360)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/boot2docker.iso'/>
	I0213 21:56:46.410231   16934 main.go:141] libmachine: (addons-548360)       <target dev='hdc' bus='scsi'/>
	I0213 21:56:46.410244   16934 main.go:141] libmachine: (addons-548360)       <readonly/>
	I0213 21:56:46.410257   16934 main.go:141] libmachine: (addons-548360)     </disk>
	I0213 21:56:46.410271   16934 main.go:141] libmachine: (addons-548360)     <disk type='file' device='disk'>
	I0213 21:56:46.410295   16934 main.go:141] libmachine: (addons-548360)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 21:56:46.410319   16934 main.go:141] libmachine: (addons-548360)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/addons-548360.rawdisk'/>
	I0213 21:56:46.410335   16934 main.go:141] libmachine: (addons-548360)       <target dev='hda' bus='virtio'/>
	I0213 21:56:46.410346   16934 main.go:141] libmachine: (addons-548360)     </disk>
	I0213 21:56:46.410360   16934 main.go:141] libmachine: (addons-548360)     <interface type='network'>
	I0213 21:56:46.410372   16934 main.go:141] libmachine: (addons-548360)       <source network='mk-addons-548360'/>
	I0213 21:56:46.410400   16934 main.go:141] libmachine: (addons-548360)       <model type='virtio'/>
	I0213 21:56:46.410425   16934 main.go:141] libmachine: (addons-548360)     </interface>
	I0213 21:56:46.410438   16934 main.go:141] libmachine: (addons-548360)     <interface type='network'>
	I0213 21:56:46.410451   16934 main.go:141] libmachine: (addons-548360)       <source network='default'/>
	I0213 21:56:46.410465   16934 main.go:141] libmachine: (addons-548360)       <model type='virtio'/>
	I0213 21:56:46.410477   16934 main.go:141] libmachine: (addons-548360)     </interface>
	I0213 21:56:46.410490   16934 main.go:141] libmachine: (addons-548360)     <serial type='pty'>
	I0213 21:56:46.410499   16934 main.go:141] libmachine: (addons-548360)       <target port='0'/>
	I0213 21:56:46.410509   16934 main.go:141] libmachine: (addons-548360)     </serial>
	I0213 21:56:46.410514   16934 main.go:141] libmachine: (addons-548360)     <console type='pty'>
	I0213 21:56:46.410522   16934 main.go:141] libmachine: (addons-548360)       <target type='serial' port='0'/>
	I0213 21:56:46.410532   16934 main.go:141] libmachine: (addons-548360)     </console>
	I0213 21:56:46.410558   16934 main.go:141] libmachine: (addons-548360)     <rng model='virtio'>
	I0213 21:56:46.410581   16934 main.go:141] libmachine: (addons-548360)       <backend model='random'>/dev/random</backend>
	I0213 21:56:46.410597   16934 main.go:141] libmachine: (addons-548360)     </rng>
	I0213 21:56:46.410609   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410623   16934 main.go:141] libmachine: (addons-548360)     
	I0213 21:56:46.410636   16934 main.go:141] libmachine: (addons-548360)   </devices>
	I0213 21:56:46.410650   16934 main.go:141] libmachine: (addons-548360) </domain>
	I0213 21:56:46.410665   16934 main.go:141] libmachine: (addons-548360) 
	I0213 21:56:46.415975   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:79:b1:f8 in network default
	I0213 21:56:46.416496   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:46.416516   16934 main.go:141] libmachine: (addons-548360) Ensuring networks are active...
	I0213 21:56:46.417247   16934 main.go:141] libmachine: (addons-548360) Ensuring network default is active
	I0213 21:56:46.417684   16934 main.go:141] libmachine: (addons-548360) Ensuring network mk-addons-548360 is active
	I0213 21:56:46.418274   16934 main.go:141] libmachine: (addons-548360) Getting domain xml...
	I0213 21:56:46.419012   16934 main.go:141] libmachine: (addons-548360) Creating domain...
	I0213 21:56:47.809666   16934 main.go:141] libmachine: (addons-548360) Waiting to get IP...
	I0213 21:56:47.810411   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:47.810857   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:47.810886   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:47.810827   16956 retry.go:31] will retry after 205.552225ms: waiting for machine to come up
	I0213 21:56:48.018429   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.018810   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.018834   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.018786   16956 retry.go:31] will retry after 353.436999ms: waiting for machine to come up
	I0213 21:56:48.373397   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.373891   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.373916   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.373840   16956 retry.go:31] will retry after 442.017345ms: waiting for machine to come up
	I0213 21:56:48.817683   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:48.818120   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:48.818158   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:48.818082   16956 retry.go:31] will retry after 401.54804ms: waiting for machine to come up
	I0213 21:56:49.221909   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:49.222386   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:49.222419   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:49.222321   16956 retry.go:31] will retry after 599.416194ms: waiting for machine to come up
	I0213 21:56:49.823133   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:49.823555   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:49.823592   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:49.823490   16956 retry.go:31] will retry after 792.814217ms: waiting for machine to come up
	I0213 21:56:50.617375   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:50.617929   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:50.617959   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:50.617852   16956 retry.go:31] will retry after 878.606074ms: waiting for machine to come up
	I0213 21:56:51.498453   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:51.498829   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:51.498856   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:51.498787   16956 retry.go:31] will retry after 1.376121244s: waiting for machine to come up
	I0213 21:56:52.876139   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:52.876641   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:52.876669   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:52.876587   16956 retry.go:31] will retry after 1.235409518s: waiting for machine to come up
	I0213 21:56:54.113466   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:54.113920   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:54.113947   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:54.113849   16956 retry.go:31] will retry after 1.675686898s: waiting for machine to come up
	I0213 21:56:55.791122   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:55.791540   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:55.791579   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:55.791458   16956 retry.go:31] will retry after 2.662216547s: waiting for machine to come up
	I0213 21:56:58.457312   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:56:58.457693   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:56:58.457723   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:56:58.457640   16956 retry.go:31] will retry after 2.61351666s: waiting for machine to come up
	I0213 21:57:01.072944   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:01.073387   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:57:01.073415   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:57:01.073325   16956 retry.go:31] will retry after 2.98804372s: waiting for machine to come up
	I0213 21:57:04.065418   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:04.065899   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find current IP address of domain addons-548360 in network mk-addons-548360
	I0213 21:57:04.065930   16934 main.go:141] libmachine: (addons-548360) DBG | I0213 21:57:04.065827   16956 retry.go:31] will retry after 4.324379457s: waiting for machine to come up
	I0213 21:57:08.393248   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:08.393624   16934 main.go:141] libmachine: (addons-548360) Found IP for machine: 192.168.39.217
	I0213 21:57:08.393649   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has current primary IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
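
The DHCP wait above is driven by a retry helper that sleeps a growing, jittered delay between attempts ("will retry after ..."). A minimal sketch of that backoff pattern, with illustrative constants rather than minikube's actual retry.go logic:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
// reached, sleeping a randomized, growing delay between attempts, similar
// in spirit to the "will retry after ..." lines in the log above.
func retryWithBackoff(maxAttempts int, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Base delay grows with the attempt number, plus up to 50% jitter.
		base := time.Duration(attempt) * 400 * time.Millisecond
		wait := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
}

func main() {
	attempts := 0
	err := retryWithBackoff(15, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
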
	I0213 21:57:08.393659   16934 main.go:141] libmachine: (addons-548360) Reserving static IP address...
	I0213 21:57:08.394090   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find host DHCP lease matching {name: "addons-548360", mac: "52:54:00:25:20:5b", ip: "192.168.39.217"} in network mk-addons-548360
	I0213 21:57:08.475351   16934 main.go:141] libmachine: (addons-548360) Reserved static IP address: 192.168.39.217
	I0213 21:57:08.475375   16934 main.go:141] libmachine: (addons-548360) Waiting for SSH to be available...
	I0213 21:57:08.475420   16934 main.go:141] libmachine: (addons-548360) DBG | Getting to WaitForSSH function...
	I0213 21:57:08.478138   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:08.478490   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360
	I0213 21:57:08.478511   16934 main.go:141] libmachine: (addons-548360) DBG | unable to find defined IP address of network mk-addons-548360 interface with MAC address 52:54:00:25:20:5b
	I0213 21:57:08.478790   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH client type: external
	I0213 21:57:08.478818   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa (-rw-------)
	I0213 21:57:08.478858   16934 main.go:141] libmachine: (addons-548360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 21:57:08.478875   16934 main.go:141] libmachine: (addons-548360) DBG | About to run SSH command:
	I0213 21:57:08.478896   16934 main.go:141] libmachine: (addons-548360) DBG | exit 0
	I0213 21:57:08.489393   16934 main.go:141] libmachine: (addons-548360) DBG | SSH cmd err, output: exit status 255: 
	I0213 21:57:08.489421   16934 main.go:141] libmachine: (addons-548360) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0213 21:57:08.489429   16934 main.go:141] libmachine: (addons-548360) DBG | command : exit 0
	I0213 21:57:08.489435   16934 main.go:141] libmachine: (addons-548360) DBG | err     : exit status 255
	I0213 21:57:08.489447   16934 main.go:141] libmachine: (addons-548360) DBG | output  : 
	I0213 21:57:11.490210   16934 main.go:141] libmachine: (addons-548360) DBG | Getting to WaitForSSH function...
	I0213 21:57:11.492757   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.493122   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.493166   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.493222   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH client type: external
	I0213 21:57:11.493249   16934 main.go:141] libmachine: (addons-548360) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa (-rw-------)
	I0213 21:57:11.493286   16934 main.go:141] libmachine: (addons-548360) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 21:57:11.493325   16934 main.go:141] libmachine: (addons-548360) DBG | About to run SSH command:
	I0213 21:57:11.493356   16934 main.go:141] libmachine: (addons-548360) DBG | exit 0
	I0213 21:57:11.589967   16934 main.go:141] libmachine: (addons-548360) DBG | SSH cmd err, output: <nil>: 
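
The WaitForSSH step above shells out to the external ssh client and runs "exit 0" until the command succeeds. A self-contained sketch of that probe, assuming a hypothetical key path and retry interval (the real option list is the one printed in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs "exit 0" on the remote host over ssh until the
// command succeeds, mirroring the WaitForSSH probe in the log above.
func waitForSSH(host, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		err := cmd.Run()
		if err == nil {
			return nil // SSH answered; the machine is reachable
		}
		fmt.Printf("ssh not ready yet: %v\n", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh did not become available on %s", host)
}

func main() {
	// Hypothetical values; in the log the IP is 192.168.39.217 and the key is
	// the machine's id_rsa under the minikube profile directory.
	if err := waitForSSH("192.168.39.217", "/path/to/id_rsa", 20); err != nil {
		fmt.Println(err)
	}
}
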
	I0213 21:57:11.590212   16934 main.go:141] libmachine: (addons-548360) KVM machine creation complete!
	I0213 21:57:11.590560   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:57:11.591153   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:11.591349   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:11.591493   16934 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 21:57:11.591509   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:11.592827   16934 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 21:57:11.592846   16934 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 21:57:11.592856   16934 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 21:57:11.592866   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.595301   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.595698   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.595723   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.595896   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.596216   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.596377   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.596514   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.596717   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.597207   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.597227   16934 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 21:57:11.729363   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:11.729439   16934 main.go:141] libmachine: Detecting the provisioner...
	I0213 21:57:11.729454   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.732148   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.732530   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.732561   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.732743   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.732938   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.733093   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.733235   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.733415   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.733712   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.733723   16934 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 21:57:11.866937   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 21:57:11.867092   16934 main.go:141] libmachine: found compatible host: buildroot
	I0213 21:57:11.867119   16934 main.go:141] libmachine: Provisioning with buildroot...
	I0213 21:57:11.867131   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:11.867417   16934 buildroot.go:166] provisioning hostname "addons-548360"
	I0213 21:57:11.867440   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:11.867665   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:11.870535   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.870962   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:11.871000   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:11.871147   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:11.871392   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.871602   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:11.871715   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:11.871899   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:11.872203   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:11.872220   16934 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-548360 && echo "addons-548360" | sudo tee /etc/hostname
	I0213 21:57:12.019622   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-548360
	
	I0213 21:57:12.019655   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.022459   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.022777   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.022814   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.022963   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.023178   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.023346   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.023487   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.023655   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:12.023969   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:12.023987   16934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-548360' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-548360/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-548360' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 21:57:12.163895   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 21:57:12.163923   16934 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 21:57:12.163956   16934 buildroot.go:174] setting up certificates
	I0213 21:57:12.163969   16934 provision.go:83] configureAuth start
	I0213 21:57:12.163982   16934 main.go:141] libmachine: (addons-548360) Calling .GetMachineName
	I0213 21:57:12.164252   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:12.166791   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.167133   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.167168   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.167345   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.169702   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.170072   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.170104   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.170211   16934 provision.go:138] copyHostCerts
	I0213 21:57:12.170298   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 21:57:12.170446   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 21:57:12.170513   16934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 21:57:12.170564   16934 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.addons-548360 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube addons-548360]
	I0213 21:57:12.411394   16934 provision.go:172] copyRemoteCerts
	I0213 21:57:12.411461   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 21:57:12.411482   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.414122   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.414437   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.414461   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.414651   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.414845   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.414979   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.415116   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:12.510978   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 21:57:12.535503   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0213 21:57:12.561952   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 21:57:12.586230   16934 provision.go:86] duration metric: configureAuth took 422.246144ms
	I0213 21:57:12.586258   16934 buildroot.go:189] setting minikube options for container-runtime
	I0213 21:57:12.586451   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:12.586520   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.589319   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.589706   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.589738   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.589978   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.590184   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.590358   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.590477   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.590642   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:12.590943   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:12.590958   16934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 21:57:12.914808   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 21:57:12.914834   16934 main.go:141] libmachine: Checking connection to Docker...
	I0213 21:57:12.914848   16934 main.go:141] libmachine: (addons-548360) Calling .GetURL
	I0213 21:57:12.916240   16934 main.go:141] libmachine: (addons-548360) DBG | Using libvirt version 6000000
	I0213 21:57:12.918551   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.918976   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.919004   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.919177   16934 main.go:141] libmachine: Docker is up and running!
	I0213 21:57:12.919195   16934 main.go:141] libmachine: Reticulating splines...
	I0213 21:57:12.919212   16934 client.go:171] LocalClient.Create took 27.183268828s
	I0213 21:57:12.919238   16934 start.go:167] duration metric: libmachine.API.Create for "addons-548360" took 27.18335019s
	I0213 21:57:12.919251   16934 start.go:300] post-start starting for "addons-548360" (driver="kvm2")
	I0213 21:57:12.919267   16934 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 21:57:12.919289   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:12.919521   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 21:57:12.919547   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:12.921705   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.922023   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:12.922065   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:12.922201   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:12.922491   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:12.922696   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:12.922843   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.019668   16934 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 21:57:13.023917   16934 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 21:57:13.023951   16934 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 21:57:13.024031   16934 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 21:57:13.024055   16934 start.go:303] post-start completed in 104.795067ms
	I0213 21:57:13.024089   16934 main.go:141] libmachine: (addons-548360) Calling .GetConfigRaw
	I0213 21:57:13.024654   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:13.027127   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.027400   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.027422   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.027659   16934 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/config.json ...
	I0213 21:57:13.027869   16934 start.go:128] duration metric: createHost completed in 27.309278567s
	I0213 21:57:13.027893   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.030177   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.030497   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.030525   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.030682   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.030884   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.031012   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.031129   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.031249   16934 main.go:141] libmachine: Using SSH client type: native
	I0213 21:57:13.031539   16934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0213 21:57:13.031551   16934 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 21:57:13.162435   16934 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707861433.146678870
	
	I0213 21:57:13.162455   16934 fix.go:206] guest clock: 1707861433.146678870
	I0213 21:57:13.162464   16934 fix.go:219] Guest: 2024-02-13 21:57:13.14667887 +0000 UTC Remote: 2024-02-13 21:57:13.027880377 +0000 UTC m=+27.428197058 (delta=118.798493ms)
	I0213 21:57:13.162524   16934 fix.go:190] guest clock delta is within tolerance: 118.798493ms
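
The guest-clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the result if the difference is small. A worked example using the exact values from the log (the tolerance constant here is illustrative; the real threshold lives in minikube's fix.go):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log lines above: the guest clock as reported by
	// `date +%s.%N` inside the VM, and the host-side wall clock at that moment.
	guest := time.Unix(1707861433, 146678870)                         // 2024-02-13 21:57:13.14667887 UTC
	remote := time.Date(2024, 2, 13, 21, 57, 13, 27880377, time.UTC) // 2024-02-13 21:57:13.027880377 UTC

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // illustrative tolerance, not minikube's actual value
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	// Prints: guest clock delta: 118.798493ms (within tolerance: true)
}
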
	I0213 21:57:13.162531   16934 start.go:83] releasing machines lock for "addons-548360", held for 27.444007014s
	I0213 21:57:13.162562   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.162844   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:13.165380   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.165773   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.165803   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.166010   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166524   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166699   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:13.166816   16934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 21:57:13.166859   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.166986   16934 ssh_runner.go:195] Run: cat /version.json
	I0213 21:57:13.167012   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:13.169714   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.169881   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170122   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.170181   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170256   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:13.170290   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.170298   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:13.170483   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:13.170485   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.170680   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.170691   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:13.170825   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:13.170845   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.170925   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:13.263317   16934 ssh_runner.go:195] Run: systemctl --version
	I0213 21:57:13.285698   16934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 21:57:13.450747   16934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 21:57:13.457033   16934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 21:57:13.457107   16934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 21:57:13.474063   16934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 21:57:13.474085   16934 start.go:475] detecting cgroup driver to use...
	I0213 21:57:13.474212   16934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 21:57:13.492777   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 21:57:13.506672   16934 docker.go:217] disabling cri-docker service (if available) ...
	I0213 21:57:13.506750   16934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 21:57:13.520577   16934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 21:57:13.534578   16934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 21:57:13.647415   16934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 21:57:13.770687   16934 docker.go:233] disabling docker service ...
	I0213 21:57:13.770763   16934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 21:57:13.784855   16934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 21:57:13.797613   16934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 21:57:13.910187   16934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 21:57:14.020787   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 21:57:14.033523   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 21:57:14.050901   16934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 21:57:14.050974   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.061538   16934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 21:57:14.061603   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.072429   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.083849   16934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 21:57:14.095122   16934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 21:57:14.106290   16934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 21:57:14.116196   16934 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 21:57:14.116273   16934 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 21:57:14.129705   16934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 21:57:14.139786   16934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 21:57:14.238953   16934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 21:57:14.405093   16934 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 21:57:14.405167   16934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
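
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear by repeatedly stat-ing it. A minimal sketch of such a poll-until-exists loop, with an assumed 500ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the timeout
// expires, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket (or file) is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
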
	I0213 21:57:14.409981   16934 start.go:543] Will wait 60s for crictl version
	I0213 21:57:14.410037   16934 ssh_runner.go:195] Run: which crictl
	I0213 21:57:14.413605   16934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 21:57:14.448266   16934 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 21:57:14.448402   16934 ssh_runner.go:195] Run: crio --version
	I0213 21:57:14.499943   16934 ssh_runner.go:195] Run: crio --version
	I0213 21:57:14.552154   16934 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 21:57:14.553432   16934 main.go:141] libmachine: (addons-548360) Calling .GetIP
	I0213 21:57:14.555983   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:14.556330   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:14.556347   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:14.556559   16934 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 21:57:14.560599   16934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
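
The shell pipeline above rewrites /etc/hosts in place: it drops any existing line for host.minikube.internal and appends a fresh "IP<TAB>hostname" entry. A small sketch of the same idempotent update done in memory (the function name is ours, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry returns hosts-file content with any existing line for the
// given hostname removed and a fresh "ip<TAB>hostname" line appended, which
// is what the grep/echo pipeline in the log does in place on /etc/hosts.
func upsertHostsEntry(content, ip, hostname string) string {
	lines := strings.Split(strings.TrimRight(content, "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(upsertHostsEntry(string(data), "192.168.39.1", "host.minikube.internal"))
}
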
	I0213 21:57:14.573837   16934 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 21:57:14.573915   16934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:14.608580   16934 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 21:57:14.608666   16934 ssh_runner.go:195] Run: which lz4
	I0213 21:57:14.612512   16934 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 21:57:14.616610   16934 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 21:57:14.616647   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 21:57:16.408492   16934 crio.go:444] Took 1.796006 seconds to copy over tarball
	I0213 21:57:16.408566   16934 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 21:57:19.845841   16934 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.437246871s)
	I0213 21:57:19.845892   16934 crio.go:451] Took 3.437365 seconds to extract the tarball
	I0213 21:57:19.845907   16934 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 21:57:19.886917   16934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 21:57:19.959974   16934 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 21:57:19.960000   16934 cache_images.go:84] Images are preloaded, skipping loading
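
The preload decision above hinges on parsing `sudo crictl images --output json` and looking for the expected control-plane image tag: before the tarball is extracted the apiserver image is missing, afterwards all images are reported as preloaded. A sketch of that check, assuming the JSON field names match crictl's usual output (an "images" array with "repoTags"):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors a subset of the JSON printed by `crictl images --output json`;
// treat the field names as an assumption about crictl's output, not a stable contract.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any locally known image tag starts with want,
// which is roughly how the log decides whether the preload tarball is needed.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}
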
	I0213 21:57:19.960092   16934 ssh_runner.go:195] Run: crio config
	I0213 21:57:20.029119   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:57:20.029138   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:57:20.029157   16934 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 21:57:20.029174   16934 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-548360 NodeName:addons-548360 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 21:57:20.029329   16934 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-548360"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 21:57:20.029417   16934 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-548360 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
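
The generated kubeadm configuration above fixes the pod subnet (10.244.0.0/16), the service subnet (10.96.0.0/12) and the node IP (192.168.39.217). A quick, self-contained sanity check one might run on such values (this is not part of minikube itself):

package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDRs and node IP taken from the kubeadm config printed above.
	_, podNet, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcNet, _ := net.ParseCIDR("10.96.0.0/12")
	nodeIP := net.ParseIP("192.168.39.217")

	// The pod and service ranges must not overlap, and the node IP should
	// sit outside both of them.
	overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
	fmt.Println("pod/service ranges overlap:", overlap)
	fmt.Println("node IP inside pod range:", podNet.Contains(nodeIP))
	fmt.Println("node IP inside service range:", svcNet.Contains(nodeIP))
}
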
	I0213 21:57:20.029481   16934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 21:57:20.038121   16934 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 21:57:20.038205   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 21:57:20.046271   16934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0213 21:57:20.063065   16934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 21:57:20.079248   16934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0213 21:57:20.094311   16934 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0213 21:57:20.098033   16934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 21:57:20.110872   16934 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360 for IP: 192.168.39.217
	I0213 21:57:20.110905   16934 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.111035   16934 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 21:57:20.277450   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt ...
	I0213 21:57:20.277477   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt: {Name:mk31e81c6fcf369272e568a89360f64eaee632c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.277635   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key ...
	I0213 21:57:20.277647   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key: {Name:mk5a13bfb25b8f575804165b4b8a96685b384af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.277713   16934 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 21:57:20.445135   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt ...
	I0213 21:57:20.445165   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt: {Name:mka72b4c29ed9f2eedab8eb8d31a798dd480cbc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.445319   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key ...
	I0213 21:57:20.445330   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key: {Name:mkcf59b560f8ce9f58eb3ce5a7742414c4473ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.445431   16934 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key
	I0213 21:57:20.445445   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt with IP's: []
	I0213 21:57:20.763645   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt ...
	I0213 21:57:20.763681   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: {Name:mk2c996c13a9e43ea51358519a302c77d5aaecdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.763905   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key ...
	I0213 21:57:20.763921   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.key: {Name:mkc0a31db82e609a57c11d8ec4cf3f8e14dda8ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.764020   16934 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f
	I0213 21:57:20.764039   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 21:57:20.920272   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f ...
	I0213 21:57:20.920304   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f: {Name:mk302d4b693f6d2f2213e0fbf36bf07e73d6785e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.920477   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f ...
	I0213 21:57:20.920494   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f: {Name:mk13522a64bd02534e8ec080df3d0b52a53cf69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.920590   16934 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt
	I0213 21:57:20.920660   16934 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key.891f873f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key
	I0213 21:57:20.920708   16934 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key
	I0213 21:57:20.920724   16934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt with IP's: []
	I0213 21:57:20.983569   16934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt ...
	I0213 21:57:20.983600   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt: {Name:mk2e43f2d8ba0f16d8d65857771ea6ff735ff239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.983766   16934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key ...
	I0213 21:57:20.983781   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key: {Name:mk1aca1e5999dbaad0b06a5aa832f0f6fd0a622a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:20.983987   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 21:57:20.984022   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 21:57:20.984047   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 21:57:20.984070   16934 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 21:57:20.984625   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 21:57:21.009637   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 21:57:21.033701   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 21:57:21.058037   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 21:57:21.082777   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 21:57:21.109205   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 21:57:21.134205   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 21:57:21.158492   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 21:57:21.181609   16934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 21:57:21.203737   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 21:57:21.219946   16934 ssh_runner.go:195] Run: openssl version
	I0213 21:57:21.225702   16934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 21:57:21.235845   16934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.240484   16934 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.240542   16934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 21:57:21.246279   16934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 21:57:21.256238   16934 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 21:57:21.260501   16934 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 21:57:21.260560   16934 kubeadm.go:404] StartCluster: {Name:addons-548360 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-548360 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:57:21.260626   16934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 21:57:21.260672   16934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 21:57:21.301542   16934 cri.go:89] found id: ""
	I0213 21:57:21.301616   16934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 21:57:21.310558   16934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 21:57:21.320254   16934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 21:57:21.330149   16934 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 21:57:21.330212   16934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 21:57:21.383177   16934 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 21:57:21.383473   16934 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 21:57:21.522929   16934 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 21:57:21.523044   16934 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 21:57:21.523170   16934 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 21:57:21.760803   16934 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 21:57:22.005058   16934 out.go:204]   - Generating certificates and keys ...
	I0213 21:57:22.005162   16934 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 21:57:22.005236   16934 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 21:57:22.055445   16934 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 21:57:22.227856   16934 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 21:57:22.284356   16934 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 21:57:22.659705   16934 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 21:57:22.790844   16934 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 21:57:22.790995   16934 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-548360 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0213 21:57:22.942718   16934 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 21:57:22.942902   16934 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-548360 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0213 21:57:23.060728   16934 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 21:57:23.164026   16934 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 21:57:23.223136   16934 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 21:57:23.223218   16934 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 21:57:23.593052   16934 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 21:57:23.704168   16934 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 21:57:23.849238   16934 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 21:57:23.925681   16934 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 21:57:23.926406   16934 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 21:57:23.928771   16934 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 21:57:23.930838   16934 out.go:204]   - Booting up control plane ...
	I0213 21:57:23.930956   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 21:57:23.931047   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 21:57:23.931164   16934 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 21:57:23.947096   16934 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 21:57:23.947621   16934 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 21:57:23.947669   16934 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 21:57:24.084614   16934 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 21:57:33.085602   16934 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002336 seconds
	I0213 21:57:33.085834   16934 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 21:57:33.101909   16934 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 21:57:33.634165   16934 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 21:57:33.634401   16934 kubeadm.go:322] [mark-control-plane] Marking the node addons-548360 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 21:57:34.149693   16934 kubeadm.go:322] [bootstrap-token] Using token: cbmtcn.y9dyg9a87331xks9
	I0213 21:57:34.151220   16934 out.go:204]   - Configuring RBAC rules ...
	I0213 21:57:34.151341   16934 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 21:57:34.159192   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 21:57:34.167576   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 21:57:34.171544   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 21:57:34.177646   16934 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 21:57:34.182556   16934 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 21:57:34.199342   16934 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 21:57:34.423464   16934 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 21:57:34.588774   16934 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 21:57:34.589896   16934 kubeadm.go:322] 
	I0213 21:57:34.589952   16934 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 21:57:34.589964   16934 kubeadm.go:322] 
	I0213 21:57:34.590032   16934 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 21:57:34.590059   16934 kubeadm.go:322] 
	I0213 21:57:34.590109   16934 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 21:57:34.590194   16934 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 21:57:34.590278   16934 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 21:57:34.590297   16934 kubeadm.go:322] 
	I0213 21:57:34.590377   16934 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 21:57:34.590394   16934 kubeadm.go:322] 
	I0213 21:57:34.590476   16934 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 21:57:34.590485   16934 kubeadm.go:322] 
	I0213 21:57:34.590575   16934 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 21:57:34.590688   16934 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 21:57:34.590781   16934 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 21:57:34.590792   16934 kubeadm.go:322] 
	I0213 21:57:34.590902   16934 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 21:57:34.591010   16934 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 21:57:34.591034   16934 kubeadm.go:322] 
	I0213 21:57:34.591141   16934 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cbmtcn.y9dyg9a87331xks9 \
	I0213 21:57:34.591269   16934 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 21:57:34.591301   16934 kubeadm.go:322] 	--control-plane 
	I0213 21:57:34.591311   16934 kubeadm.go:322] 
	I0213 21:57:34.591408   16934 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 21:57:34.591419   16934 kubeadm.go:322] 
	I0213 21:57:34.591510   16934 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cbmtcn.y9dyg9a87331xks9 \
	I0213 21:57:34.591643   16934 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 21:57:34.591968   16934 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 21:57:34.591994   16934 cni.go:84] Creating CNI manager for ""
	I0213 21:57:34.592010   16934 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:57:34.593754   16934 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 21:57:34.595036   16934 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 21:57:34.646395   16934 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 21:57:34.707445   16934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 21:57:34.707538   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:34.707541   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=addons-548360 minikube.k8s.io/updated_at=2024_02_13T21_57_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:34.924172   16934 ops.go:34] apiserver oom_adj: -16
	I0213 21:57:34.924293   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:35.424455   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:35.924977   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.424360   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:36.924887   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.424355   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:37.925341   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.425207   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:38.924889   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.424552   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:39.924705   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.425099   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:40.924859   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.424397   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:41.925361   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.424971   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:42.924424   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.425109   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:43.924938   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.425167   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:44.924635   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.424385   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:45.924499   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.424527   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:46.924901   16934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 21:57:47.029676   16934 kubeadm.go:1088] duration metric: took 12.322204741s to wait for elevateKubeSystemPrivileges.
	I0213 21:57:47.029706   16934 kubeadm.go:406] StartCluster complete in 25.769151528s
	I0213 21:57:47.029723   16934 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:47.029855   16934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:57:47.030230   16934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 21:57:47.030425   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 21:57:47.030482   16934 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0213 21:57:47.030590   16934 addons.go:69] Setting ingress-dns=true in profile "addons-548360"
	I0213 21:57:47.030604   16934 addons.go:69] Setting yakd=true in profile "addons-548360"
	I0213 21:57:47.030617   16934 addons.go:234] Setting addon ingress-dns=true in "addons-548360"
	I0213 21:57:47.030629   16934 addons.go:234] Setting addon yakd=true in "addons-548360"
	I0213 21:57:47.030640   16934 addons.go:69] Setting default-storageclass=true in profile "addons-548360"
	I0213 21:57:47.030659   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:47.030666   16934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-548360"
	I0213 21:57:47.030669   16934 addons.go:69] Setting metrics-server=true in profile "addons-548360"
	I0213 21:57:47.030675   16934 addons.go:69] Setting gcp-auth=true in profile "addons-548360"
	I0213 21:57:47.030679   16934 addons.go:69] Setting volumesnapshots=true in profile "addons-548360"
	I0213 21:57:47.030682   16934 addons.go:234] Setting addon metrics-server=true in "addons-548360"
	I0213 21:57:47.030691   16934 addons.go:234] Setting addon volumesnapshots=true in "addons-548360"
	I0213 21:57:47.030660   16934 addons.go:69] Setting inspektor-gadget=true in profile "addons-548360"
	I0213 21:57:47.030697   16934 addons.go:69] Setting helm-tiller=true in profile "addons-548360"
	I0213 21:57:47.030686   16934 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-548360"
	I0213 21:57:47.030691   16934 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-548360"
	I0213 21:57:47.030711   16934 addons.go:69] Setting ingress=true in profile "addons-548360"
	I0213 21:57:47.030716   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030717   16934 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-548360"
	I0213 21:57:47.030723   16934 addons.go:234] Setting addon ingress=true in "addons-548360"
	I0213 21:57:47.030724   16934 addons.go:69] Setting cloud-spanner=true in profile "addons-548360"
	I0213 21:57:47.030726   16934 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-548360"
	I0213 21:57:47.030735   16934 addons.go:234] Setting addon cloud-spanner=true in "addons-548360"
	I0213 21:57:47.030755   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030756   16934 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-548360"
	I0213 21:57:47.030770   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030787   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030636   16934 addons.go:69] Setting registry=true in profile "addons-548360"
	I0213 21:57:47.030828   16934 addons.go:234] Setting addon registry=true in "addons-548360"
	I0213 21:57:47.030859   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030670   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.030670   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031143   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031166   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031184   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030719   16934 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-548360"
	I0213 21:57:47.031241   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031260   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031271   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030706   16934 addons.go:234] Setting addon inspektor-gadget=true in "addons-548360"
	I0213 21:57:47.030722   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031311   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031350   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030717   16934 addons.go:69] Setting storage-provisioner=true in profile "addons-548360"
	I0213 21:57:47.031394   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031398   16934 addons.go:234] Setting addon storage-provisioner=true in "addons-548360"
	I0213 21:57:47.031225   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.030707   16934 addons.go:234] Setting addon helm-tiller=true in "addons-548360"
	I0213 21:57:47.031229   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031425   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.030693   16934 mustload.go:65] Loading cluster: addons-548360
	I0213 21:57:47.031226   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031477   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031505   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031307   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031523   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031564   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031590   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031275   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031633   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031638   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.031663   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031712   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031731   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031905   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.031943   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.031971   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.032006   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.032214   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.045854   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 21:57:47.046776   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.047275   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.047296   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.047646   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.048199   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.048220   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.048237   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0213 21:57:47.048629   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.049042   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.049057   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.049361   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.049545   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.049797   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0213 21:57:47.050432   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.050471   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.050671   16934 config.go:182] Loaded profile config "addons-548360": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 21:57:47.051029   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.051063   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.051502   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.052303   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.052322   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.053241   16934 addons.go:234] Setting addon default-storageclass=true in "addons-548360"
	I0213 21:57:47.053269   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.053590   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.053614   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.057781   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.058554   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.058583   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.088525   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0213 21:57:47.088764   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0213 21:57:47.089162   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.089262   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.089843   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.089863   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.090244   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.090260   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.090545   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.090586   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.091098   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.091150   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.091785   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.091820   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.094306   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38465
	I0213 21:57:47.094592   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0213 21:57:47.094770   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.095403   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.095545   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.095564   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.095818   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0213 21:57:47.095979   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.096263   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.096280   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.096612   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.096673   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.096714   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.097121   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.097155   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.097949   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40881
	I0213 21:57:47.098521   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.098618   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.099050   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.099068   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.099455   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.099591   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.099605   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.100036   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.100067   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.100355   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.100532   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.103621   16934 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-548360"
	I0213 21:57:47.103665   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.104077   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.104109   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.108071   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I0213 21:57:47.108543   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.109096   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.109114   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.109477   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.110007   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.110041   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.110245   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0213 21:57:47.115905   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39271
	I0213 21:57:47.116559   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.117173   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.117205   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.117529   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.118089   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.118129   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.118340   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I0213 21:57:47.119276   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0213 21:57:47.120443   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.120769   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34787
	I0213 21:57:47.120981   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.121056   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.121082   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.121510   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.121857   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.122084   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.122104   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.122186   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.122478   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.122522   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.122557   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.122693   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.122713   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.122960   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.123701   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.123714   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.124518   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.124757   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.126924   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0213 21:57:47.125558   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.125677   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.130033   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0213 21:57:47.131428   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0213 21:57:47.130566   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0213 21:57:47.130597   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0213 21:57:47.130850   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.132834   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0213 21:57:47.135131   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0213 21:57:47.134182   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.134213   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0213 21:57:47.134245   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:47.135771   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.135842   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.136020   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38737
	I0213 21:57:47.138009   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0213 21:57:47.139062   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0213 21:57:47.137089   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.137122   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0213 21:57:47.137240   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.137255   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0213 21:57:47.137277   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.137451   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.137624   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I0213 21:57:47.137759   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.140059   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.141030   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:47.142661   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0213 21:57:47.140676   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.141116   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.141209   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.141953   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.142025   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.142251   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.142283   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.142502   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.142621   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0213 21:57:47.145036   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:47.145112   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.146277   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144171   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144215   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144254   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0213 21:57:47.146422   16934 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:47.146436   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0213 21:57:47.144342   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.146454   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.146456   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.144545   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.144809   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.145478   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.143884   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0213 21:57:47.146526   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0213 21:57:47.146538   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.146538   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.146493   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.147257   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147267   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0213 21:57:47.147320   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147361   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147395   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.147440   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.147477   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.147961   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.147985   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148374   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.148710   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.148756   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148834   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:47.148853   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:47.148935   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.149380   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.149396   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.149531   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.149552   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.149797   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.149953   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.150015   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150218   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150447   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.150678   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.150824   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0213 21:57:47.151578   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.152662   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.152679   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.152743   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.152798   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.152839   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.153878   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.155566   16934 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0213 21:57:47.156731   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0213 21:57:47.156750   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0213 21:57:47.155651   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.156775   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.158032   16934 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0213 21:57:47.154726   16934 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:47.154892   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.155063   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.155189   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.154367   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.155793   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.156977   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.159265   16934 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0213 21:57:47.159323   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0213 21:57:47.159344   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159395   16934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 21:57:47.161217   16934 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:47.161232   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 21:57:47.161247   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159480   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 21:57:47.161311   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.159510   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161358   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.159536   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161380   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160074   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.160097   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.162747   16934 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0213 21:57:47.162788   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0213 21:57:47.160303   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160875   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.164227   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.161610   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.162518   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.164269   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.160177   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.163235   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.164293   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.164110   16934 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:47.164349   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0213 21:57:47.164369   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.164327   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.164529   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.164512   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.164573   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.164613   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.164728   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.165001   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.166018   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.166027   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.166058   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.167409   16934 out.go:177]   - Using image docker.io/registry:2.8.3
	I0213 21:57:47.166108   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.166149   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.166358   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.166405   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.166797   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.166804   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.166969   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.167327   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41987
	I0213 21:57:47.170521   16934 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0213 21:57:47.168953   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.168978   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.169017   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.169511   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.169529   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.170035   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.170086   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.170109   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.170814   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.171989   16934 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0213 21:57:47.172006   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.172012   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0213 21:57:47.172029   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.172031   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.172820   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.172869   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.172908   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.172921   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.172938   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.173000   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33963
	I0213 21:57:47.173007   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.173046   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.173254   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.173319   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.173674   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.173698   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.174191   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.174224   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.174407   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.174685   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.174702   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.175052   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.175204   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.175864   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.177585   16934 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0213 21:57:47.178764   16934 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:47.178781   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0213 21:57:47.178796   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.176982   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.177406   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.178883   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.178910   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.178121   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.180268   16934 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0213 21:57:47.179123   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.181393   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0213 21:57:47.181407   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0213 21:57:47.181425   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.181600   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.181729   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.183263   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.183859   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.183885   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.184050   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.184236   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.184374   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.184490   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.185009   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.185438   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.185458   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.185620   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.185734   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.185834   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.185929   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.192254   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0213 21:57:47.192362   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
	I0213 21:57:47.192732   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.192810   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.193237   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.193266   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.193698   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.193704   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.193721   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.194057   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.194098   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.194140   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0213 21:57:47.194474   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.194662   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.194955   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.194982   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.195380   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.195554   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.196027   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.197947   16934 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0213 21:57:47.196695   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.197085   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.199344   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0213 21:57:47.199359   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0213 21:57:47.199376   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.197762   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41563
	I0213 21:57:47.200692   16934 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0213 21:57:47.199793   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:47.201932   16934 out.go:177]   - Using image docker.io/busybox:stable
	I0213 21:57:47.203167   16934 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0213 21:57:47.201948   16934 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:47.202386   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:47.202561   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.203029   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.204430   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0213 21:57:47.204454   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:47.204463   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.204511   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.204537   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.204584   16934 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:47.204593   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0213 21:57:47.204606   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.205067   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:47.205117   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.205305   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.205478   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.205539   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:47.207410   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.207766   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.207801   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.207905   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.207977   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.208012   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:47.208058   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.209527   16934 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0213 21:57:47.208422   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.208453   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.208526   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.210797   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 21:57:47.210824   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 21:57:47.210838   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:47.210862   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.210896   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.211378   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.211410   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.211612   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.213531   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.213812   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:47.213831   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:47.213968   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:47.214110   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:47.214237   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:47.214341   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:47.341672   16934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 21:57:47.447686   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0213 21:57:47.452946   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0213 21:57:47.452965   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0213 21:57:47.466985   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 21:57:47.468370   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0213 21:57:47.502966   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0213 21:57:47.538541   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0213 21:57:47.538572   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0213 21:57:47.586308   16934 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0213 21:57:47.586331   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0213 21:57:47.594846   16934 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0213 21:57:47.594869   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0213 21:57:47.604393   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 21:57:47.623156   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0213 21:57:47.655990   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0213 21:57:47.670221   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0213 21:57:47.670248   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0213 21:57:47.671130   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 21:57:47.671147   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0213 21:57:47.679982   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0213 21:57:47.680011   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0213 21:57:47.690361   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0213 21:57:47.690381   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0213 21:57:47.721338   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0213 21:57:47.721368   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0213 21:57:47.744448   16934 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:47.744467   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0213 21:57:47.799636   16934 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0213 21:57:47.799658   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0213 21:57:47.822602   16934 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-548360" context rescaled to 1 replicas
	I0213 21:57:47.822648   16934 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 21:57:47.824421   16934 out.go:177] * Verifying Kubernetes components...
	I0213 21:57:47.825670   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:57:47.885945   16934 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:47.885970   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0213 21:57:47.920330   16934 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0213 21:57:47.920360   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0213 21:57:48.022962   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0213 21:57:48.022993   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0213 21:57:48.024577   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 21:57:48.024597   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 21:57:48.044501   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0213 21:57:48.044532   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0213 21:57:48.045118   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0213 21:57:48.051693   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0213 21:57:48.095850   16934 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0213 21:57:48.095883   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0213 21:57:48.107249   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0213 21:57:48.107277   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0213 21:57:48.161641   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0213 21:57:48.161663   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0213 21:57:48.199292   16934 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:48.199318   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 21:57:48.212361   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0213 21:57:48.212382   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0213 21:57:48.219986   16934 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:48.220006   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0213 21:57:48.260428   16934 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0213 21:57:48.260449   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0213 21:57:48.296036   16934 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:48.296064   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0213 21:57:48.305511   16934 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0213 21:57:48.305545   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0213 21:57:48.321860   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:48.341904   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 21:57:48.357546   16934 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0213 21:57:48.357569   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0213 21:57:48.373747   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0213 21:57:48.373767   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0213 21:57:48.402607   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0213 21:57:48.453552   16934 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0213 21:57:48.453580   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0213 21:57:48.465745   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0213 21:57:48.465764   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0213 21:57:48.565468   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0213 21:57:48.565489   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0213 21:57:48.567452   16934 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:48.567471   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0213 21:57:48.632273   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0213 21:57:48.632299   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0213 21:57:48.644098   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0213 21:57:48.705183   16934 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:48.705206   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0213 21:57:48.777708   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0213 21:57:51.631112   16934 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.289381169s)
	I0213 21:57:51.631157   16934 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
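	The bash pipeline that just completed rewrites the coredns ConfigMap so the Corefile gains a log directive before the errors line and a hosts block before the existing forward stanza; that is what "host record injected into CoreDNS's ConfigMap" refers to. Assuming the stock minikube Corefile layout, the edited fragment looks roughly like this (the log line and the hosts block are the injected parts):

	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf

	With this in place, pods in the cluster can resolve host.minikube.internal to the host-side address 192.168.39.1.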
	I0213 21:57:53.203298   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.755572894s)
	I0213 21:57:53.203353   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:53.203367   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:53.203671   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:53.203692   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:53.203703   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:53.203715   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:53.203950   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:53.203952   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:53.203964   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:54.160937   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0213 21:57:54.160973   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:54.164341   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.164789   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:54.164821   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.165081   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:54.165261   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:54.165411   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:54.165574   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:54.350007   16934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0213 21:57:54.384394   16934 addons.go:234] Setting addon gcp-auth=true in "addons-548360"
	I0213 21:57:54.384452   16934 host.go:66] Checking if "addons-548360" exists ...
	I0213 21:57:54.384854   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:54.384905   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:54.413216   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I0213 21:57:54.413683   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:54.414184   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:54.414209   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:54.414520   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:54.415049   16934 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 21:57:54.415089   16934 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 21:57:54.430396   16934 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0213 21:57:54.430880   16934 main.go:141] libmachine: () Calling .GetVersion
	I0213 21:57:54.431367   16934 main.go:141] libmachine: Using API Version  1
	I0213 21:57:54.431389   16934 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 21:57:54.431719   16934 main.go:141] libmachine: () Calling .GetMachineName
	I0213 21:57:54.431923   16934 main.go:141] libmachine: (addons-548360) Calling .GetState
	I0213 21:57:54.433611   16934 main.go:141] libmachine: (addons-548360) Calling .DriverName
	I0213 21:57:54.433855   16934 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0213 21:57:54.433892   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHHostname
	I0213 21:57:54.437038   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.437508   16934 main.go:141] libmachine: (addons-548360) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:20:5b", ip: ""} in network mk-addons-548360: {Iface:virbr1 ExpiryTime:2024-02-13 22:57:02 +0000 UTC Type:0 Mac:52:54:00:25:20:5b Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-548360 Clientid:01:52:54:00:25:20:5b}
	I0213 21:57:54.437540   16934 main.go:141] libmachine: (addons-548360) DBG | domain addons-548360 has defined IP address 192.168.39.217 and MAC address 52:54:00:25:20:5b in network mk-addons-548360
	I0213 21:57:54.437740   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHPort
	I0213 21:57:54.437963   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHKeyPath
	I0213 21:57:54.438158   16934 main.go:141] libmachine: (addons-548360) Calling .GetSSHUsername
	I0213 21:57:54.438369   16934 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/addons-548360/id_rsa Username:docker}
	I0213 21:57:55.188662   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.721646525s)
	I0213 21:57:55.188709   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:55.188721   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:55.189113   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:55.189124   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:55.189131   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:55.189151   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:55.189161   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:55.189376   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:55.189389   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:55.189412   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.183469   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.680449659s)
	I0213 21:57:57.183545   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.560360593s)
	I0213 21:57:57.183575   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183578   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183591   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183594   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.715195126s)
	I0213 21:57:57.183620   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183503   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.579071088s)
	I0213 21:57:57.183637   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183649   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183651   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.527634671s)
	I0213 21:57:57.183659   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183667   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183592   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183681   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183671   16934 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (9.357981717s)
	I0213 21:57:57.183704   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.138561285s)
	I0213 21:57:57.183723   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183732   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.183735   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.132013303s)
	I0213 21:57:57.183753   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.183763   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184085   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184094   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184100   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184114   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184122   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184123   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184187   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184188   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.184214   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184220   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184224   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184229   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184235   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184239   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184244   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184239   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184264   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184287   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184303   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184336   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.184360   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.184378   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.184394   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.184248   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.185166   16934 node_ready.go:35] waiting up to 6m0s for node "addons-548360" to be "Ready" ...
	I0213 21:57:57.185361   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185387   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185394   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.185402   16934 addons.go:470] Verifying addon registry=true in "addons-548360"
	I0213 21:57:57.188119   16934 out.go:177] * Verifying registry addon...
	I0213 21:57:57.185818   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185838   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185858   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185891   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185910   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.185925   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185955   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185971   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.185988   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.186008   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.187435   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.865522581s)
	I0213 21:57:57.187502   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.845558901s)
	I0213 21:57:57.187545   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.784913671s)
	I0213 21:57:57.187611   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.543481909s)
	I0213 21:57:57.187670   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.187713   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.189467   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189481   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189492   16934 addons.go:470] Verifying addon ingress=true in "addons-548360"
	I0213 21:57:57.189512   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.189525   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189533   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.191249   16934 out.go:177] * Verifying ingress addon...
	I0213 21:57:57.189515   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189589   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189593   16934 main.go:141] libmachine: Making call to close driver server
	W0213 21:57:57.189597   16934 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0213 21:57:57.189601   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.189606   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.189859   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.189940   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.190407   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0213 21:57:57.192727   16934 retry.go:31] will retry after 163.08714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
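	The apply failure above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, and it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for that kind yet, hence "ensure CRDs are installed first". minikube simply retries the whole apply (see the retry below, which re-runs it with --force). A hypothetical way to avoid the race, sketched here with the file names from this log plus a kubectl wait step that is not part of minikube's flow, is to let the CRDs become established before applying the custom resources:

	    kubectl apply \
	      -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	      -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	      -f snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    kubectl apply \
	      -f csi-hostpath-snapshotclass.yaml \
	      -f rbac-volume-snapshot-controller.yaml \
	      -f volume-snapshot-controller-deployment.yaml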
	I0213 21:57:57.192755   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192764   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192769   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192773   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192793   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192782   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.192747   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.192849   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.192859   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.193059   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.193070   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.193086   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.193094   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.193706   16934 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0213 21:57:57.193966   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.193986   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194007   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194016   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194021   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194025   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194034   16934 addons.go:470] Verifying addon metrics-server=true in "addons-548360"
	I0213 21:57:57.194055   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194070   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194095   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194105   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194113   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194135   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194162   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.194172   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.194184   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.194191   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.194073   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.194970   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.195006   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.195018   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.196901   16934 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-548360 service yakd-dashboard -n yakd-dashboard
	
	I0213 21:57:57.247881   16934 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0213 21:57:57.247906   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:57.247924   16934 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0213 21:57:57.247942   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:57.253214   16934 node_ready.go:49] node "addons-548360" has status "Ready":"True"
	I0213 21:57:57.253238   16934 node_ready.go:38] duration metric: took 68.050213ms waiting for node "addons-548360" to be "Ready" ...
	I0213 21:57:57.253247   16934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:57:57.272956   16934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace to be "Ready" ...
	I0213 21:57:57.273792   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.273818   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.274200   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.274221   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.283372   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.283399   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.283712   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.283729   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.283736   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.356821   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0213 21:57:57.751886   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:57.754071   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:57.828318   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.05054908s)
	I0213 21:57:57.828367   16934 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.394479094s)
	I0213 21:57:57.829964   16934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0213 21:57:57.828369   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.831524   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.833206   16934 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0213 21:57:57.831838   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.831883   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.834479   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.834508   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:57:57.834520   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:57:57.834517   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0213 21:57:57.834538   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0213 21:57:57.834802   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:57:57.834861   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:57:57.834875   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:57:57.834892   16934 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-548360"
	I0213 21:57:57.836472   16934 out.go:177] * Verifying csi-hostpath-driver addon...
	I0213 21:57:57.838449   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0213 21:57:57.942705   16934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0213 21:57:57.942730   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:58.202903   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:58.203280   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:58.350015   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0213 21:57:58.350038   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0213 21:57:58.389433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:58.465652   16934 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:57:58.465673   16934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0213 21:57:58.513125   16934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0213 21:57:58.703609   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:58.708801   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:58.886416   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:59.229280   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:59.229388   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:59.477299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:57:59.523584   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:57:59.724203   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:57:59.725189   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:57:59.874334   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:00.215111   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:00.215864   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:00.361719   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:00.740344   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:00.740964   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:00.833715   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.476828323s)
	I0213 21:58:00.833785   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:00.833797   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:00.834211   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:00.834235   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:00.834246   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:00.834256   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:00.834211   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:00.834497   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:00.834559   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:00.834579   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:00.849517   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:01.228509   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:01.233861   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.321447   16934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.808278554s)
	I0213 21:58:01.321512   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.321526   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:01.321781   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.321806   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.321816   16934 main.go:141] libmachine: Making call to close driver server
	I0213 21:58:01.321824   16934 main.go:141] libmachine: (addons-548360) Calling .Close
	I0213 21:58:01.322265   16934 main.go:141] libmachine: Successfully made call to close driver server
	I0213 21:58:01.322280   16934 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 21:58:01.322283   16934 main.go:141] libmachine: (addons-548360) DBG | Closing plugin on server side
	I0213 21:58:01.324584   16934 addons.go:470] Verifying addon gcp-auth=true in "addons-548360"
	I0213 21:58:01.326166   16934 out.go:177] * Verifying gcp-auth addon...
	I0213 21:58:01.328887   16934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0213 21:58:01.340329   16934 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0213 21:58:01.340354   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:01.353522   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:01.700118   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:01.700891   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:01.783604   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:01.836616   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:01.858743   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:02.212067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.218210   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:02.340125   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:02.358420   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:02.702335   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:02.702712   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:02.834849   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:02.852913   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:03.208098   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.208103   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.332856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:03.346363   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:03.700660   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:03.702315   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:03.834353   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:03.846072   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.201526   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.201986   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.304230   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:04.336752   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:04.360021   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:04.699665   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:04.704701   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:04.860057   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:04.860700   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:05.214363   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.216095   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.333009   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:05.347438   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:05.711108   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:05.711487   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:05.833800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:05.848191   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.198656   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.200643   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.335935   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.344046   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:06.703116   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:06.703234   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:06.785163   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:06.833994   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:06.847873   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.198764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.210929   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.333686   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.345188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:07.700061   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:07.700346   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:07.834681   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:07.870429   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.349075   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.350349   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.350919   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:08.351069   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.708917   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:08.709899   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:08.786493   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:08.838999   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:08.845127   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.200107   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.201246   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.335540   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.344400   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:09.699941   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:09.700420   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:09.833132   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:09.845549   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.219944   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.220183   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.332941   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.344534   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:10.698002   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:10.699732   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:10.835853   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:10.846454   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.201721   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.204181   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.279842   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:11.333832   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.345773   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:11.699760   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:11.701420   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:11.832699   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:11.857621   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.219290   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:12.219436   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.333315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:12.349937   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:12.702512   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:12.715014   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.102504   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.115059   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.200907   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.201243   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.281646   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:13.334486   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.356803   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:13.700144   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:13.700184   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:13.834035   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:13.845253   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.199485   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.199706   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.333694   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.344900   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:14.705214   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:14.707008   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:14.834291   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:14.845172   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.199603   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.202909   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.282604   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:15.334057   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:15.346190   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.698256   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:15.699481   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:15.883703   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:15.885156   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.198539   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.200784   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.333507   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.351847   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:16.732340   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:16.732670   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:16.836054   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:16.861095   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.488184   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.488330   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:17.488391   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.490375   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:17.498128   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:17.699304   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:17.701133   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:17.833645   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:17.849555   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:18.199689   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.201202   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.333382   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.347584   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:18.699820   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:18.701140   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:18.833511   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:18.845318   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.198924   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.199373   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.333948   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.345386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:19.699097   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:19.699654   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:19.779904   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:19.833625   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:19.845946   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.198712   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.198988   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.332882   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.344380   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:20.699319   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:20.699856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:20.833660   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:20.847341   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.199778   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.200378   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.333697   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.343901   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:21.835764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:21.836061   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:21.840469   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:21.840615   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:21.845826   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.198371   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.201260   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.335345   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.353231   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:22.698434   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:22.698787   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:22.833767   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:22.845032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.201216   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.201290   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.333017   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.345694   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:23.699102   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:23.700202   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:23.834595   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:23.864500   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.198306   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.199153   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.281399   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:24.333615   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.348722   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:24.699050   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:24.705649   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:24.833288   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:24.851799   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.198852   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.202650   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.334891   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.343993   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:25.698900   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:25.703040   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:25.833249   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:25.848363   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.199412   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.199549   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.333590   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.347914   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:26.698750   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:26.699295   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:26.783436   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:26.833424   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:26.845424   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.198326   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.203981   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.333460   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.350566   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:27.699128   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:27.705033   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:27.835532   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:27.848615   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:28.204226   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:28.204386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:28.334348   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:28.344312   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.084980   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.085519   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.085895   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.093996   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.111280   16934 pod_ready.go:102] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:29.203234   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.204700   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.333500   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.351822   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:29.697795   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:29.700197   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:29.834643   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:29.844847   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.199292   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:30.200866   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.333359   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.344521   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:30.698722   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:30.698908   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:30.793111   16934 pod_ready.go:92] pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.793135   16934 pod_ready.go:81] duration metric: took 33.520147005s waiting for pod "coredns-5dd5756b68-hlmz9" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.793143   16934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.799105   16934 pod_ready.go:92] pod "etcd-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.799138   16934 pod_ready.go:81] duration metric: took 5.988013ms waiting for pod "etcd-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.799147   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.808659   16934 pod_ready.go:92] pod "kube-apiserver-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.808678   16934 pod_ready.go:81] duration metric: took 9.525583ms waiting for pod "kube-apiserver-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.808687   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.817804   16934 pod_ready.go:92] pod "kube-controller-manager-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.817831   16934 pod_ready.go:81] duration metric: took 9.136825ms waiting for pod "kube-controller-manager-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.817848   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gkr4l" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.823597   16934 pod_ready.go:92] pod "kube-proxy-gkr4l" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:30.823622   16934 pod_ready.go:81] duration metric: took 5.766025ms waiting for pod "kube-proxy-gkr4l" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.823633   16934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:30.832480   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:30.844535   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.178077   16934 pod_ready.go:92] pod "kube-scheduler-addons-548360" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:31.178098   16934 pod_ready.go:81] duration metric: took 354.457599ms waiting for pod "kube-scheduler-addons-548360" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.178108   16934 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:31.197489   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:31.199237   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.333145   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.344419   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:31.702538   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:31.705733   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:31.836603   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:31.873088   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:32.197701   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:32.198640   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:32.335929   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:32.343718   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:32.699856   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:32.700189   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:32.833417   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:32.844800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.186031   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:33.198776   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:33.202827   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.333284   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.344120   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:33.698860   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:33.700345   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:33.836067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:33.849554   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.331262   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:34.334236   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.335964   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.343546   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:34.705550   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:34.708017   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:34.834815   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:34.845274   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.191157   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:35.205241   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.207918   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:35.333315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.348217   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:35.709680   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:35.715909   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:35.836245   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:35.872134   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.206766   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.213491   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:36.333647   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.378766   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:36.730508   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:36.739411   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:36.843193   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:36.859920   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.198337   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:37.201862   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.338604   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.385197   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:37.690740   16934 pod_ready.go:102] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"False"
	I0213 21:58:37.700030   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:37.711768   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:37.833487   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:37.847399   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.198290   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:38.200246   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.351665   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.352330   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.730522   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:38.734536   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:38.824769   16934 pod_ready.go:92] pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:38.824798   16934 pod_ready.go:81] duration metric: took 7.646684168s waiting for pod "metrics-server-69cf46c98-ghxhg" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.824809   16934 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.839634   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:38.841561   16934 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace has status "Ready":"True"
	I0213 21:58:38.841583   16934 pod_ready.go:81] duration metric: took 16.766832ms waiting for pod "nvidia-device-plugin-daemonset-mhcwx" in "kube-system" namespace to be "Ready" ...
	I0213 21:58:38.841605   16934 pod_ready.go:38] duration metric: took 41.5883375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 21:58:38.841624   16934 api_server.go:52] waiting for apiserver process to appear ...
	I0213 21:58:38.841682   16934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 21:58:38.852182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:38.932262   16934 api_server.go:72] duration metric: took 51.109583568s to wait for apiserver process to appear ...
	I0213 21:58:38.932292   16934 api_server.go:88] waiting for apiserver healthz status ...
	I0213 21:58:38.932319   16934 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0213 21:58:38.937752   16934 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0213 21:58:38.939365   16934 api_server.go:141] control plane version: v1.28.4
	I0213 21:58:38.939388   16934 api_server.go:131] duration metric: took 7.089518ms to wait for apiserver health ...
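(Reference note, not part of the captured log: the apiserver health check recorded just above can be reproduced manually from any machine that can reach the VM. Anonymous access to /healthz is the usual kubeadm default, but it may be restricted on hardened clusters.)

    # Probe the same endpoint the log shows returning 200/ok.
    # -k skips TLS verification of the cluster's self-signed certificate.
    curl -k https://192.168.39.217:8443/healthz
    # expected response body: ok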
	I0213 21:58:38.939396   16934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 21:58:38.961653   16934 system_pods.go:59] 18 kube-system pods found
	I0213 21:58:38.961702   16934 system_pods.go:61] "coredns-5dd5756b68-hlmz9" [8da21de0-1ed2-4221-8e70-36bbe7832fe0] Running
	I0213 21:58:38.961712   16934 system_pods.go:61] "csi-hostpath-attacher-0" [f3d05280-dffc-4b3e-87af-241451cc1cdc] Running
	I0213 21:58:38.961719   16934 system_pods.go:61] "csi-hostpath-resizer-0" [6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088] Running
	I0213 21:58:38.961731   16934 system_pods.go:61] "csi-hostpathplugin-f89wf" [4a792c70-a32f-4608-98ec-26b9c817b4f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:38.961743   16934 system_pods.go:61] "etcd-addons-548360" [bb102c25-39b6-4c8f-89ed-2325429ec12c] Running
	I0213 21:58:38.961755   16934 system_pods.go:61] "kube-apiserver-addons-548360" [e8714aaf-bf32-4429-be1c-67c8f3156cc9] Running
	I0213 21:58:38.961765   16934 system_pods.go:61] "kube-controller-manager-addons-548360" [eee31965-d4a3-4c21-ad11-48490702b453] Running
	I0213 21:58:38.961773   16934 system_pods.go:61] "kube-ingress-dns-minikube" [f1e93909-d75e-4377-be18-60377f7ce06d] Running
	I0213 21:58:38.961782   16934 system_pods.go:61] "kube-proxy-gkr4l" [2ea7ce55-faee-4a44-a16d-98788c2932b6] Running
	I0213 21:58:38.961792   16934 system_pods.go:61] "kube-scheduler-addons-548360" [48e6baab-2960-4701-88b0-43e9c88c673c] Running
	I0213 21:58:38.961804   16934 system_pods.go:61] "metrics-server-69cf46c98-ghxhg" [723e578e-19de-4bcf-86ed-9de4ffbe5650] Running
	I0213 21:58:38.961814   16934 system_pods.go:61] "nvidia-device-plugin-daemonset-mhcwx" [b9eec8df-b97e-4c67-9916-c51b3600b54b] Running
	I0213 21:58:38.961935   16934 system_pods.go:61] "registry-75mmv" [a146cfb0-9524-40f7-8bab-91a56de079a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 21:58:38.962096   16934 system_pods.go:61] "registry-proxy-mfshx" [dad71134-5cc3-4fa4-b391-4a08b89d5d04] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 21:58:38.962115   16934 system_pods.go:61] "snapshot-controller-58dbcc7b99-56xxb" [a8d47014-172e-4559-816c-97635f87860a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.962134   16934 system_pods.go:61] "snapshot-controller-58dbcc7b99-8pfd2" [6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.962142   16934 system_pods.go:61] "storage-provisioner" [71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f] Running
	I0213 21:58:38.962155   16934 system_pods.go:61] "tiller-deploy-7b677967b9-jn92b" [2a63d83e-5212-4e3e-9e40-0e87c7d8a741] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0213 21:58:38.962167   16934 system_pods.go:74] duration metric: took 22.764784ms to wait for pod list to return data ...
	I0213 21:58:38.962183   16934 default_sa.go:34] waiting for default service account to be created ...
	I0213 21:58:38.979920   16934 default_sa.go:45] found service account: "default"
	I0213 21:58:38.979949   16934 default_sa.go:55] duration metric: took 17.758442ms for default service account to be created ...
	I0213 21:58:38.979960   16934 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 21:58:38.997511   16934 system_pods.go:86] 18 kube-system pods found
	I0213 21:58:38.997552   16934 system_pods.go:89] "coredns-5dd5756b68-hlmz9" [8da21de0-1ed2-4221-8e70-36bbe7832fe0] Running
	I0213 21:58:38.997560   16934 system_pods.go:89] "csi-hostpath-attacher-0" [f3d05280-dffc-4b3e-87af-241451cc1cdc] Running
	I0213 21:58:38.997567   16934 system_pods.go:89] "csi-hostpath-resizer-0" [6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088] Running
	I0213 21:58:38.997578   16934 system_pods.go:89] "csi-hostpathplugin-f89wf" [4a792c70-a32f-4608-98ec-26b9c817b4f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0213 21:58:38.997585   16934 system_pods.go:89] "etcd-addons-548360" [bb102c25-39b6-4c8f-89ed-2325429ec12c] Running
	I0213 21:58:38.997594   16934 system_pods.go:89] "kube-apiserver-addons-548360" [e8714aaf-bf32-4429-be1c-67c8f3156cc9] Running
	I0213 21:58:38.997605   16934 system_pods.go:89] "kube-controller-manager-addons-548360" [eee31965-d4a3-4c21-ad11-48490702b453] Running
	I0213 21:58:38.997613   16934 system_pods.go:89] "kube-ingress-dns-minikube" [f1e93909-d75e-4377-be18-60377f7ce06d] Running
	I0213 21:58:38.997619   16934 system_pods.go:89] "kube-proxy-gkr4l" [2ea7ce55-faee-4a44-a16d-98788c2932b6] Running
	I0213 21:58:38.997625   16934 system_pods.go:89] "kube-scheduler-addons-548360" [48e6baab-2960-4701-88b0-43e9c88c673c] Running
	I0213 21:58:38.997631   16934 system_pods.go:89] "metrics-server-69cf46c98-ghxhg" [723e578e-19de-4bcf-86ed-9de4ffbe5650] Running
	I0213 21:58:38.997637   16934 system_pods.go:89] "nvidia-device-plugin-daemonset-mhcwx" [b9eec8df-b97e-4c67-9916-c51b3600b54b] Running
	I0213 21:58:38.997646   16934 system_pods.go:89] "registry-75mmv" [a146cfb0-9524-40f7-8bab-91a56de079a4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0213 21:58:38.997654   16934 system_pods.go:89] "registry-proxy-mfshx" [dad71134-5cc3-4fa4-b391-4a08b89d5d04] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0213 21:58:38.997667   16934 system_pods.go:89] "snapshot-controller-58dbcc7b99-56xxb" [a8d47014-172e-4559-816c-97635f87860a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.997679   16934 system_pods.go:89] "snapshot-controller-58dbcc7b99-8pfd2" [6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0213 21:58:38.997685   16934 system_pods.go:89] "storage-provisioner" [71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f] Running
	I0213 21:58:38.997693   16934 system_pods.go:89] "tiller-deploy-7b677967b9-jn92b" [2a63d83e-5212-4e3e-9e40-0e87c7d8a741] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0213 21:58:38.997702   16934 system_pods.go:126] duration metric: took 17.736144ms to wait for k8s-apps to be running ...
	I0213 21:58:38.997712   16934 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 21:58:38.997766   16934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 21:58:39.047538   16934 system_svc.go:56] duration metric: took 49.816812ms WaitForService to wait for kubelet.
	I0213 21:58:39.047568   16934 kubeadm.go:581] duration metric: took 51.224893413s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 21:58:39.047591   16934 node_conditions.go:102] verifying NodePressure condition ...
	I0213 21:58:39.055663   16934 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 21:58:39.055696   16934 node_conditions.go:123] node cpu capacity is 2
	I0213 21:58:39.055715   16934 node_conditions.go:105] duration metric: took 8.118361ms to run NodePressure ...
	I0213 21:58:39.055728   16934 start.go:228] waiting for startup goroutines ...
	I0213 21:58:39.199611   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.199689   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:39.333368   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.345444   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:39.699315   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:39.700897   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:39.835546   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:39.853838   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.198863   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.199081   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:40.333433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.344586   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:40.698485   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:40.699026   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:40.834135   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:40.848384   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.197926   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:41.200177   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.334284   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.346044   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:41.702323   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:41.703289   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:41.833824   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:41.853059   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.198225   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.199115   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:42.333906   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.344832   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:42.699072   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:42.699213   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:42.835628   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:42.844821   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.198627   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.199020   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:43.339064   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.344849   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:43.698908   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:43.708112   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:43.833752   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:43.855862   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.200853   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.201606   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:44.348136   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.351222   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:44.697778   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:44.698137   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:44.834300   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:44.845464   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.198532   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.204140   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:45.333668   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.345836   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:45.700935   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:45.700981   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:45.833266   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:45.847066   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.199424   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.200883   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:46.334182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.344445   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:46.698364   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:46.699772   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:46.840570   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:46.850299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.198870   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:47.200024   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.333045   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.344123   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:47.699984   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:47.705409   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:47.833476   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:47.844300   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.356674   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.356792   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:48.356818   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.361207   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:48.700433   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:48.700575   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:48.833067   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:48.850618   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.200197   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:49.201763   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.333435   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.344385   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:49.698596   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:49.700105   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:49.833557   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:49.849288   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.199594   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.200574   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:50.334598   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.347674   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:50.708469   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:50.714691   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:50.961622   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:50.996949   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.201124   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.202389   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:51.332901   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.348080   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:51.699801   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:51.700040   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:51.833784   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:51.847914   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.197764   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:52.204194   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.334188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.344858   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:52.698707   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:52.699287   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:52.833001   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:52.844779   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.200520   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:53.200972   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.334275   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.351082   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:53.700761   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:53.709267   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:53.833278   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:53.844627   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.198580   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:54.198737   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.335931   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.349110   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:54.698536   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:54.699476   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:54.832467   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:54.845299   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.198479   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.199313   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:55.344640   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.355609   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:55.701906   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:55.702319   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:55.833947   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:55.851574   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.198070   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.198738   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:56.333399   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.345188   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:56.699685   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:56.700377   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:56.833967   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:56.852513   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.198865   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.199043   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:57.333952   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.344266   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:57.698833   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:57.699011   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:57.833302   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:57.850666   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.200920   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:58.201149   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.334009   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.344903   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:58.698320   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:58.698990   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:58.833673   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:58.848281   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:59.614032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.614077   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:58:59.614618   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:59.614837   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:59.698350   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:58:59.699053   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:58:59.834348   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:58:59.848613   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:00.198412   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:00.200244   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.338735   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.358645   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:00.701175   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:00.701206   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:00.834746   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:00.845406   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:01.197612   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0213 21:59:01.201690   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.334352   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.345496   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:01.701655   16934 kapi.go:107] duration metric: took 1m4.511243495s to wait for kubernetes.io/minikube-addons=registry ...
	I0213 21:59:01.701708   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:01.833423   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:01.866125   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:02.216647   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.338164   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.346326   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:02.704917   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:02.842130   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:02.872754   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:03.211524   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.335417   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.346682   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:03.699316   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:03.833698   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:03.844915   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:04.213994   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.337526   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.345708   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:04.702859   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:04.835596   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:04.856593   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:05.216678   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.334092   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.344989   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:05.698636   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:05.842528   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:05.846972   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:06.200187   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.334109   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.344209   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:06.700687   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:06.832938   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:06.844875   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:07.214051   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.332922   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.348376   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:07.827084   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:07.838386   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:07.850091   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:08.198790   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.333622   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.350720   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:08.698235   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:08.833782   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:08.844121   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:09.202570   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.333385   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.349327   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:09.700015   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:09.833505   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:09.844393   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:10.198062   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.333466   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.346713   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:10.698313   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:10.833594   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:10.847484   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:11.198498   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.332564   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.348663   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:11.700929   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:11.833298   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:11.854885   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:12.199396   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:12.333616   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:12.356462   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:12.697902   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:12.833800   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:12.843955   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:13.200299   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:13.333610   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.349199   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:13.700810   16934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0213 21:59:13.867862   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:13.869817   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:14.199427   16934 kapi.go:107] duration metric: took 1m17.005721851s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0213 21:59:14.333413   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.346680   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:14.847182   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:14.871938   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:15.334025   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:15.344501   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:15.833591   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:15.846121   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.335503   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0213 21:59:16.346110   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.874581   16934 kapi.go:107] duration metric: took 1m15.545690469s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0213 21:59:16.876403   16934 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-548360 cluster.
	I0213 21:59:16.877721   16934 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0213 21:59:16.875935   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:16.879264   16934 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0213 21:59:17.354841   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:17.954916   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:18.345854   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:18.847382   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:19.345364   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:19.883032   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:20.344416   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:20.844672   16934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0213 21:59:21.345072   16934 kapi.go:107] duration metric: took 1m23.506622295s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0213 21:59:21.346924   16934 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, ingress-dns, nvidia-device-plugin, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0213 21:59:21.348333   16934 addons.go:505] enable addons completed in 1m34.317850901s: enabled=[cloud-spanner storage-provisioner helm-tiller inspektor-gadget metrics-server ingress-dns nvidia-device-plugin yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0213 21:59:21.348384   16934 start.go:233] waiting for cluster config update ...
	I0213 21:59:21.348406   16934 start.go:242] writing updated cluster config ...
	I0213 21:59:21.348659   16934 ssh_runner.go:195] Run: rm -f paused
	I0213 21:59:21.400966   16934 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 21:59:21.402762   16934 out.go:177] * Done! kubectl is now configured to use "addons-548360" cluster and "default" namespace by default
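(Reference note, not part of the captured log: the gcp-auth messages earlier in this log explain that every new pod gets GCP credentials mounted, that a pod can opt out via the gcp-auth-skip-secret label, and that pre-existing pods can be refreshed by rerunning addons enable with --refresh. The sketch below is illustrative only: the pod name no-creds-demo and its image are hypothetical, and the label value "true" is an assumption, since the message only names the key.)

    # Create a pod that should not have GCP credentials injected (hypothetical example):
    kubectl --context addons-548360 run no-creds-demo --image=busybox \
      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600

    # Re-run the addon with --refresh so pods created earlier also get the mount,
    # as the log message above suggests:
    minikube -p addons-548360 addons enable gcp-auth --refresh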
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 21:56:58 UTC, ends at Tue 2024-02-13 21:59:36 UTC. --
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.597854398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861576597830733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:452670,},InodesUsed:&UInt64Value{Value:193,},},},}" file="go-grpc-middleware/chain.go:25" id=73209b98-6c4c-47fe-aa52-b3a39dd73491 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.600184955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e219780b-23a1-41a9-9d19-19a292814441 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.600502891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e219780b-23a1-41a9-9d19-19a292814441 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.606714992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02b18dcccee0ae0a246b04e99c2ca592297b2d12c4b14c54796a231a7852a23f,PodSandboxId:a3e836c914df5dacb080336bae47468ba143974f9f76a5610e074604614f095e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1707861573671604645,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cffa27a8-9bfe-47ba-8749-6ec7d781d992,},Annotations:map[string]string{io.kubernetes.container.hash: 19dedfbb,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c001726db08735dbac76e42f656eeaca5563bf951c72753de3582bb3821f69,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1707861560004841360,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: b738c21c,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a48823c17572fe02d98156532ca544172d7772bca2c91c735a7318d0cca97ce,PodSandboxId:6940a26e02f3c794cc07371a3e892908cd7e044d169d72fae3a48a0709e59340,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861558596303047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-9t94c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4bb8c0b-19ba-4d8c-87e7-f7c1607f6803,},Annotations:map[string]string{io.kubernetes.container.hash: 60c3b3bd,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb80cdf6a0b545eb79e34a9d668d21e4d0cd1d4a6884bd24a6dc4a01c27a8d2a,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1707861558105999119,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.
kubernetes.container.hash: eee7b964,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05aa5c4be13e3b00cbbe5874fa8a9c7a3c3aa23d4d4cd89dce224c812db39642,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1707861554096537973,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugi
n-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9659034a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72,PodSandboxId:1857afa97fb8b12b88f072a9b377c56c3c343cb88f1a5bf9146147b293d116af,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,State:CONTAINER_RUNNING,CreatedAt:1707861553028027337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kub
ernetes.pod.name: ingress-nginx-controller-69cff4fd79-9cp25,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b35307cf-04bb-45d3-9312-e76f538fda2f,},Annotations:map[string]string{io.kubernetes.container.hash: 79d24091,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecd7f1bcdc37125ecebe4ea24c3d75b0333e5b0bf69e5387fb34ae1710dbf961,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1707861544470189320,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9699f473,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df254
8c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861541736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a9360e24b5b71974593abc5bca9419ecb110b5cadff48d177098444fb240b0,PodSandboxId:682958d
3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1707861541831484564,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 8f4f1c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4
13dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed,PodSandboxId:c201e09fce62a16f6a898326929755323bac356f9ef96ef3bd52c97e619bdc53,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1707861540261854607,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-mfshx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad71134-5cc3-4fa4-b391-4a08b89d5d04,},Annotations:map[string]string{io.kubernetes.container.hash: 95d7661a,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316017b48544eb4abdccd1d7d8765406ec8e395544ce1ead616277e84f6f7d39,PodSandboxId:0285162c468e9326edc9d1c4a222798c8d08528d3a8d92bfc9b42eb96f7267a2,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1707861534860492410,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-jn92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a63d83e-5212-4e3e-9e40-0e87c7d8a741,},Annotations:map[string]string{io.kubernetes.container.hash: 28201ffa,io.kubernetes.container.
ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48aaa6b69f06e5f60ada0c08741e9787b1d0b15d6225130c70f70eb94cd35ba,PodSandboxId:a5b24d9544aa9081ec08821d8e91ae6787a6b10aa6c5b8c626718d4a14147541,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861531910087929,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller
,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-56xxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d47014-172e-4559-816c-97635f87860a,},Annotations:map[string]string{io.kubernetes.container.hash: ce43ac1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c6484ab3f7a131ca0b23cbfc9965fd451a0580f14c393e64d83d8c6c2e8325,PodSandboxId:281abeb2b2b61b2bd988db365c6a5504ea0408ac8e0d3aac0206a89f8f85239b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1707861531770600076,Labels
:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-lnqs8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fbe4c213-d961-400b-a2f4-611fac2af689,},Annotations:map[string]string{io.kubernetes.container.hash: a0df0609,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf1974208
9ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c9a67d54e843df9877bc45e55b360b07bf3dbbdc3339292bc800cb5278b675,PodSandboxId:bf5586c76450111afe42e1c7c004654b10d28c38fc8f1749f39b4f0e8d167df7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/s
ig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861525920851675,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-8pfd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e,},Annotations:map[string]string{io.kubernetes.container.hash: fc65829b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d,PodSandboxId:71a6c802506a6fb545cbff91ae04826d91f65cca666f683c94aec78ba7e343a7,Metadata:&Container
Metadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,State:CONTAINER_RUNNING,CreatedAt:1707861524237589677,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-75mmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a146cfb0-9524-40f7-8bab-91a56de079a4,},Annotations:map[string]string{io.kubernetes.container.hash: 92387334,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768e5c5d654be9e33f4b897f45602d9d0fec04d123a0f45d4bc27ef3b78e9629,PodSandboxId:9898c56b2710
dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_RUNNING,CreatedAt:1707861514928437313,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35,PodSandboxId:4edb48eb984f3d17c86e036476c120292a3c8d4c0dccd140de26d9b8176708d4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1707861512938294277,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e93909-d75e-4377-be18-60377f7ce06d,},Annotations:map[string]string{io.kubernetes.container.hash: e30ff70a,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae1d6929fafd0e4ff83dee0146c1834001af5389f5d345e585a9976ab28e34d,PodSandboxId:7e02128bd13f2f959be6dadfebaf054349a328f3274fd5ceb116fd2f2b3642d7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1707861505925143649,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088,},Annotations:map[string]string{io.kubernetes.container.hash: a3ec2f9,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d5d917bf79ca1db4205f8da5f62d489312141a152e163406ee3792bf201c93,PodSandboxId:6deba3bb4df129cd8614aaf1585852c584ad00737c7d5a9d784dcad35968fef8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1707861504155113411,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d05280-dffc-4b3e-87af-241451cc1cdc,},Annotations:map[string]string{io.kubernetes.container.hash: e4b
9663,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744aec35d59c6d98b63cc672bdd3c5f78c5cdb86544eb475074d963532a4b54c,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1707861502372620935,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4217d917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: y
akd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480474df46bfbadef1d32884ad0592dba599989897a33fe941d03cc96faa2c18,PodSandboxId:9898c56b2710dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_EXITED,CreatedAt:17078614824021796
01,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab931
3c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Sta
te:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6d
bc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5
d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map
[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[strin
g]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e219780b-23a1-41a9-9d19-19a292814441 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.726120403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=93e6d6d8-e710-4be1-bad5-55f747e332d0 name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.726294365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=93e6d6d8-e710-4be1-bad5-55f747e332d0 name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.727353877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fc6c9c13-6eed-466b-93a1-7aa5101e6c8c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.728507890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861576728488917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:452670,},InodesUsed:&UInt64Value{Value:193,},},},}" file="go-grpc-middleware/chain.go:25" id=fc6c9c13-6eed-466b-93a1-7aa5101e6c8c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.729270981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb534626-9507-41b6-a6c2-700e398a01e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.729363149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb534626-9507-41b6-a6c2-700e398a01e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.730521606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02b18dcccee0ae0a246b04e99c2ca592297b2d12c4b14c54796a231a7852a23f,PodSandboxId:a3e836c914df5dacb080336bae47468ba143974f9f76a5610e074604614f095e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1707861573671604645,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cffa27a8-9bfe-47ba-8749-6ec7d781d992,},Annotations:map[string]string{io.kubernetes.container.hash: 19dedfbb,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c001726db08735dbac76e42f656eeaca5563bf951c72753de3582bb3821f69,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1707861560004841360,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: b738c21c,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a48823c17572fe02d98156532ca544172d7772bca2c91c735a7318d0cca97ce,PodSandboxId:6940a26e02f3c794cc07371a3e892908cd7e044d169d72fae3a48a0709e59340,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861558596303047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-9t94c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4bb8c0b-19ba-4d8c-87e7-f7c1607f6803,},Annotations:map[string]string{io.kubernetes.container.hash: 60c3b3bd,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb80cdf6a0b545eb79e34a9d668d21e4d0cd1d4a6884bd24a6dc4a01c27a8d2a,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1707861558105999119,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.
kubernetes.container.hash: eee7b964,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05aa5c4be13e3b00cbbe5874fa8a9c7a3c3aa23d4d4cd89dce224c812db39642,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1707861554096537973,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugi
n-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9659034a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72,PodSandboxId:1857afa97fb8b12b88f072a9b377c56c3c343cb88f1a5bf9146147b293d116af,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,State:CONTAINER_RUNNING,CreatedAt:1707861553028027337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kub
ernetes.pod.name: ingress-nginx-controller-69cff4fd79-9cp25,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b35307cf-04bb-45d3-9312-e76f538fda2f,},Annotations:map[string]string{io.kubernetes.container.hash: 79d24091,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecd7f1bcdc37125ecebe4ea24c3d75b0333e5b0bf69e5387fb34ae1710dbf961,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1707861544470189320,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9699f473,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df254
8c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861541736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a9360e24b5b71974593abc5bca9419ecb110b5cadff48d177098444fb240b0,PodSandboxId:682958d
3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1707861541831484564,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 8f4f1c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4
13dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed,PodSandboxId:c201e09fce62a16f6a898326929755323bac356f9ef96ef3bd52c97e619bdc53,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1707861540261854607,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-mfshx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad71134-5cc3-4fa4-b391-4a08b89d5d04,},Annotations:map[string]string{io.kubernetes.container.hash: 95d7661a,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316017b48544eb4abdccd1d7d8765406ec8e395544ce1ead616277e84f6f7d39,PodSandboxId:0285162c468e9326edc9d1c4a222798c8d08528d3a8d92bfc9b42eb96f7267a2,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1707861534860492410,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-jn92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a63d83e-5212-4e3e-9e40-0e87c7d8a741,},Annotations:map[string]string{io.kubernetes.container.hash: 28201ffa,io.kubernetes.container.
ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48aaa6b69f06e5f60ada0c08741e9787b1d0b15d6225130c70f70eb94cd35ba,PodSandboxId:a5b24d9544aa9081ec08821d8e91ae6787a6b10aa6c5b8c626718d4a14147541,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861531910087929,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller
,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-56xxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d47014-172e-4559-816c-97635f87860a,},Annotations:map[string]string{io.kubernetes.container.hash: ce43ac1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c6484ab3f7a131ca0b23cbfc9965fd451a0580f14c393e64d83d8c6c2e8325,PodSandboxId:281abeb2b2b61b2bd988db365c6a5504ea0408ac8e0d3aac0206a89f8f85239b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1707861531770600076,Labels
:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-lnqs8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fbe4c213-d961-400b-a2f4-611fac2af689,},Annotations:map[string]string{io.kubernetes.container.hash: a0df0609,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf1974208
9ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c9a67d54e843df9877bc45e55b360b07bf3dbbdc3339292bc800cb5278b675,PodSandboxId:bf5586c76450111afe42e1c7c004654b10d28c38fc8f1749f39b4f0e8d167df7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/s
ig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861525920851675,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-8pfd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e,},Annotations:map[string]string{io.kubernetes.container.hash: fc65829b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d,PodSandboxId:71a6c802506a6fb545cbff91ae04826d91f65cca666f683c94aec78ba7e343a7,Metadata:&Container
Metadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,State:CONTAINER_RUNNING,CreatedAt:1707861524237589677,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-75mmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a146cfb0-9524-40f7-8bab-91a56de079a4,},Annotations:map[string]string{io.kubernetes.container.hash: 92387334,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768e5c5d654be9e33f4b897f45602d9d0fec04d123a0f45d4bc27ef3b78e9629,PodSandboxId:9898c56b2710
dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_RUNNING,CreatedAt:1707861514928437313,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35,PodSandboxId:4edb48eb984f3d17c86e036476c120292a3c8d4c0dccd140de26d9b8176708d4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1707861512938294277,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e93909-d75e-4377-be18-60377f7ce06d,},Annotations:map[string]string{io.kubernetes.container.hash: e30ff70a,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae1d6929fafd0e4ff83dee0146c1834001af5389f5d345e585a9976ab28e34d,PodSandboxId:7e02128bd13f2f959be6dadfebaf054349a328f3274fd5ceb116fd2f2b3642d7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1707861505925143649,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088,},Annotations:map[string]string{io.kubernetes.container.hash: a3ec2f9,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d5d917bf79ca1db4205f8da5f62d489312141a152e163406ee3792bf201c93,PodSandboxId:6deba3bb4df129cd8614aaf1585852c584ad00737c7d5a9d784dcad35968fef8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1707861504155113411,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d05280-dffc-4b3e-87af-241451cc1cdc,},Annotations:map[string]string{io.kubernetes.container.hash: e4b
9663,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744aec35d59c6d98b63cc672bdd3c5f78c5cdb86544eb475074d963532a4b54c,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1707861502372620935,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4217d917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: y
akd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480474df46bfbadef1d32884ad0592dba599989897a33fe941d03cc96faa2c18,PodSandboxId:9898c56b2710dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_EXITED,CreatedAt:17078614824021796
01,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab931
3c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Sta
te:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6d
bc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5
d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map
[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[strin
g]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb534626-9507-41b6-a6c2-700e398a01e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.881915702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d638c967-ded9-4b6e-adf4-3c37f87efe5c name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.882004257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d638c967-ded9-4b6e-adf4-3c37f87efe5c name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.883021254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3fe304e1-6a2a-4d12-b056-1c3990c6a4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.884161011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861576884143414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:452670,},InodesUsed:&UInt64Value{Value:193,},},},}" file="go-grpc-middleware/chain.go:25" id=3fe304e1-6a2a-4d12-b056-1c3990c6a4d9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.884771573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f01c1bc-b826-469f-8ebb-06ff5c88eadc name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.884859562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f01c1bc-b826-469f-8ebb-06ff5c88eadc name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.885711667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02b18dcccee0ae0a246b04e99c2ca592297b2d12c4b14c54796a231a7852a23f,PodSandboxId:a3e836c914df5dacb080336bae47468ba143974f9f76a5610e074604614f095e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1707861573671604645,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cffa27a8-9bfe-47ba-8749-6ec7d781d992,},Annotations:map[string]string{io.kubernetes.container.hash: 19dedfbb,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c001726db08735dbac76e42f656eeaca5563bf951c72753de3582bb3821f69,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1707861560004841360,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: b738c21c,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a48823c17572fe02d98156532ca544172d7772bca2c91c735a7318d0cca97ce,PodSandboxId:6940a26e02f3c794cc07371a3e892908cd7e044d169d72fae3a48a0709e59340,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861558596303047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-9t94c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4bb8c0b-19ba-4d8c-87e7-f7c1607f6803,},Annotations:map[string]string{io.kubernetes.container.hash: 60c3b3bd,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb80cdf6a0b545eb79e34a9d668d21e4d0cd1d4a6884bd24a6dc4a01c27a8d2a,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1707861558105999119,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.
kubernetes.container.hash: eee7b964,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05aa5c4be13e3b00cbbe5874fa8a9c7a3c3aa23d4d4cd89dce224c812db39642,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1707861554096537973,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugi
n-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9659034a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72,PodSandboxId:1857afa97fb8b12b88f072a9b377c56c3c343cb88f1a5bf9146147b293d116af,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,State:CONTAINER_RUNNING,CreatedAt:1707861553028027337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kub
ernetes.pod.name: ingress-nginx-controller-69cff4fd79-9cp25,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b35307cf-04bb-45d3-9312-e76f538fda2f,},Annotations:map[string]string{io.kubernetes.container.hash: 79d24091,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecd7f1bcdc37125ecebe4ea24c3d75b0333e5b0bf69e5387fb34ae1710dbf961,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1707861544470189320,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9699f473,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df254
8c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861541736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a9360e24b5b71974593abc5bca9419ecb110b5cadff48d177098444fb240b0,PodSandboxId:682958d
3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1707861541831484564,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 8f4f1c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4
13dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed,PodSandboxId:c201e09fce62a16f6a898326929755323bac356f9ef96ef3bd52c97e619bdc53,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1707861540261854607,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-mfshx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad71134-5cc3-4fa4-b391-4a08b89d5d04,},Annotations:map[string]string{io.kubernetes.container.hash: 95d7661a,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316017b48544eb4abdccd1d7d8765406ec8e395544ce1ead616277e84f6f7d39,PodSandboxId:0285162c468e9326edc9d1c4a222798c8d08528d3a8d92bfc9b42eb96f7267a2,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1707861534860492410,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-jn92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a63d83e-5212-4e3e-9e40-0e87c7d8a741,},Annotations:map[string]string{io.kubernetes.container.hash: 28201ffa,io.kubernetes.container.
ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48aaa6b69f06e5f60ada0c08741e9787b1d0b15d6225130c70f70eb94cd35ba,PodSandboxId:a5b24d9544aa9081ec08821d8e91ae6787a6b10aa6c5b8c626718d4a14147541,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861531910087929,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller
,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-56xxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d47014-172e-4559-816c-97635f87860a,},Annotations:map[string]string{io.kubernetes.container.hash: ce43ac1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c6484ab3f7a131ca0b23cbfc9965fd451a0580f14c393e64d83d8c6c2e8325,PodSandboxId:281abeb2b2b61b2bd988db365c6a5504ea0408ac8e0d3aac0206a89f8f85239b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1707861531770600076,Labels
:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-lnqs8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fbe4c213-d961-400b-a2f4-611fac2af689,},Annotations:map[string]string{io.kubernetes.container.hash: a0df0609,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf1974208
9ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c9a67d54e843df9877bc45e55b360b07bf3dbbdc3339292bc800cb5278b675,PodSandboxId:bf5586c76450111afe42e1c7c004654b10d28c38fc8f1749f39b4f0e8d167df7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/s
ig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861525920851675,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-8pfd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e,},Annotations:map[string]string{io.kubernetes.container.hash: fc65829b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d,PodSandboxId:71a6c802506a6fb545cbff91ae04826d91f65cca666f683c94aec78ba7e343a7,Metadata:&Container
Metadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,State:CONTAINER_RUNNING,CreatedAt:1707861524237589677,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-75mmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a146cfb0-9524-40f7-8bab-91a56de079a4,},Annotations:map[string]string{io.kubernetes.container.hash: 92387334,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768e5c5d654be9e33f4b897f45602d9d0fec04d123a0f45d4bc27ef3b78e9629,PodSandboxId:9898c56b2710
dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_RUNNING,CreatedAt:1707861514928437313,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35,PodSandboxId:4edb48eb984f3d17c86e036476c120292a3c8d4c0dccd140de26d9b8176708d4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1707861512938294277,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e93909-d75e-4377-be18-60377f7ce06d,},Annotations:map[string]string{io.kubernetes.container.hash: e30ff70a,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae1d6929fafd0e4ff83dee0146c1834001af5389f5d345e585a9976ab28e34d,PodSandboxId:7e02128bd13f2f959be6dadfebaf054349a328f3274fd5ceb116fd2f2b3642d7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1707861505925143649,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088,},Annotations:map[string]string{io.kubernetes.container.hash: a3ec2f9,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d5d917bf79ca1db4205f8da5f62d489312141a152e163406ee3792bf201c93,PodSandboxId:6deba3bb4df129cd8614aaf1585852c584ad00737c7d5a9d784dcad35968fef8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1707861504155113411,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d05280-dffc-4b3e-87af-241451cc1cdc,},Annotations:map[string]string{io.kubernetes.container.hash: e4b
9663,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744aec35d59c6d98b63cc672bdd3c5f78c5cdb86544eb475074d963532a4b54c,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1707861502372620935,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4217d917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: y
akd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480474df46bfbadef1d32884ad0592dba599989897a33fe941d03cc96faa2c18,PodSandboxId:9898c56b2710dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_EXITED,CreatedAt:17078614824021796
01,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab931
3c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Sta
te:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6d
bc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5
d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map
[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[strin
g]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f01c1bc-b826-469f-8ebb-06ff5c88eadc name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.929598431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c3f5426f-dc11-4476-9371-c887d7df9d75 name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.929694869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c3f5426f-dc11-4476-9371-c887d7df9d75 name=/runtime.v1.RuntimeService/Version
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.931678369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=41908434-337c-4006-a090-bcea9f3bc54a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.933465233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707861576933444149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:452670,},InodesUsed:&UInt64Value{Value:193,},},},}" file="go-grpc-middleware/chain.go:25" id=41908434-337c-4006-a090-bcea9f3bc54a name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.934189191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=53f726a8-79c9-4619-a55d-db7669e84641 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.934292511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=53f726a8-79c9-4619-a55d-db7669e84641 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 21:59:36 addons-548360 crio[716]: time="2024-02-13 21:59:36.934987227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02b18dcccee0ae0a246b04e99c2ca592297b2d12c4b14c54796a231a7852a23f,PodSandboxId:a3e836c914df5dacb080336bae47468ba143974f9f76a5610e074604614f095e,Metadata:&ContainerMetadata{Name:registry-test,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee,State:CONTAINER_EXITED,CreatedAt:1707861573671604645,Labels:map[string]string{io.kubernetes.container.name: registry-test,io.kubernetes.pod.name: registry-test,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cffa27a8-9bfe-47ba-8749-6ec7d781d992,},Annotations:map[string]string{io.kubernetes.container.hash: 19dedfbb,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4c001726db08735dbac76e42f656eeaca5563bf951c72753de3582bb3821f69,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1707861560004841360,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: b738c21c,
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a48823c17572fe02d98156532ca544172d7772bca2c91c735a7318d0cca97ce,PodSandboxId:6940a26e02f3c794cc07371a3e892908cd7e044d169d72fae3a48a0709e59340,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861558596303047,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-9t94c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4bb8c0b-19ba-4d8c-87e7-f7c1607f6803,},Annotations:map[string]string{io.kubernetes.container.hash: 60c3b3bd,io.kubernetes.c
ontainer.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb80cdf6a0b545eb79e34a9d668d21e4d0cd1d4a6884bd24a6dc4a01c27a8d2a,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1707861558105999119,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.
kubernetes.container.hash: eee7b964,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b,PodSandboxId:950e8a5fbeda8ad3734a44c410a56035caa85befb3e307e3c99091a597223ee9,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1707861556114949961,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-j7fcp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 8e7e3b12-1387-41ee-b0f1-93f3b29e86c2,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 6ba4def6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05aa5c4be13e3b00cbbe5874fa8a9c7a3c3aa23d4d4cd89dce224c812db39642,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1707861554096537973,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugi
n-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9659034a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea12f128374e22fb8ecac00ae3c14b3bde25ad92de6e8a67afb66fce120ec72,PodSandboxId:1857afa97fb8b12b88f072a9b377c56c3c343cb88f1a5bf9146147b293d116af,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75,State:CONTAINER_RUNNING,CreatedAt:1707861553028027337,Labels:map[string]string{io.kubernetes.container.name: controller,io.kub
ernetes.pod.name: ingress-nginx-controller-69cff4fd79-9cp25,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b35307cf-04bb-45d3-9312-e76f538fda2f,},Annotations:map[string]string{io.kubernetes.container.hash: 79d24091,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ecd7f1bcdc37125ecebe4ea24c3d75b0333e5b0bf69e5387fb34ae1710dbf961,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image
:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1707861544470189320,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 9699f473,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa131601c5a6550aa98b52bf89603690209c2c0b1ac7247009e40f7b122f8a58,PodSandboxId:6df254
8c025b0261b9668642ad2946038076f4dccc4c5942d564a4ae602b1db9,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861541736791411,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjclt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 88643b72-5b51-4217-942a-f286ddf52cd0,},Annotations:map[string]string{io.kubernetes.container.hash: f00a0ce0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a9360e24b5b71974593abc5bca9419ecb110b5cadff48d177098444fb240b0,PodSandboxId:682958d
3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1707861541831484564,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 8f4f1c0c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4
13dfe614824a15591e2f08b6d18108b2f4c0e9fd781fe324aa145bab38b992,PodSandboxId:038f551356f3998e69159031a2b885d55652e380c1b671694746d1802c9044ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1707861540375609125,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xfjh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f49049d6-6ca3-4b61-be2a-867c087fa990,},Annotations:map[string]string{io.kubernetes.container.hash: 9d26e618,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed,PodSandboxId:c201e09fce62a16f6a898326929755323bac356f9ef96ef3bd52c97e619bdc53,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5,State:CONTAINER_RUNNING,CreatedAt:1707861540261854607,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-mfshx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dad71134-5cc3-4fa4-b391-4a08b89d5d04,},Annotations:map[string]string{io.kubernetes.container.hash: 95d7661a,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.c
ontainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:316017b48544eb4abdccd1d7d8765406ec8e395544ce1ead616277e84f6f7d39,PodSandboxId:0285162c468e9326edc9d1c4a222798c8d08528d3a8d92bfc9b42eb96f7267a2,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},},ImageRef:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,State:CONTAINER_RUNNING,CreatedAt:1707861534860492410,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-jn92b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a63d83e-5212-4e3e-9e40-0e87c7d8a741,},Annotations:map[string]string{io.kubernetes.container.hash: 28201ffa,io.kubernetes.container.
ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48aaa6b69f06e5f60ada0c08741e9787b1d0b15d6225130c70f70eb94cd35ba,PodSandboxId:a5b24d9544aa9081ec08821d8e91ae6787a6b10aa6c5b8c626718d4a14147541,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861531910087929,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller
,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-56xxb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d47014-172e-4559-816c-97635f87860a,},Annotations:map[string]string{io.kubernetes.container.hash: ce43ac1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3c6484ab3f7a131ca0b23cbfc9965fd451a0580f14c393e64d83d8c6c2e8325,PodSandboxId:281abeb2b2b61b2bd988db365c6a5504ea0408ac8e0d3aac0206a89f8f85239b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},},ImageRef:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,State:CONTAINER_RUNNING,CreatedAt:1707861531770600076,Labels
:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-lnqs8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fbe4c213-d961-400b-a2f4-611fac2af689,},Annotations:map[string]string{io.kubernetes.container.hash: a0df0609,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4816e9514181cfcf42bfcf1ff03f7329c978e42712d24f46ae3e48b3c968676e,PodSandboxId:7e00a6ef707169bf714037bc60a7119e7cf4ec8046a034de08eae658076e77a6,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49,Annotations:map[string]string{},},ImageRef:gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf1974208
9ae1be45b7b8aa49,State:CONTAINER_RUNNING,CreatedAt:1707861530198851088,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-64c8c85f65-bwgbm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 92006ffc-89c1-4ab2-9676-94b45895f5f9,},Annotations:map[string]string{io.kubernetes.container.hash: adc0746c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86c9a67d54e843df9877bc45e55b360b07bf3dbbdc3339292bc800cb5278b675,PodSandboxId:bf5586c76450111afe42e1c7c004654b10d28c38fc8f1749f39b4f0e8d167df7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/s
ig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1707861525920851675,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-8pfd2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e61f7c6-9f12-4be5-b7fd-c0091c1fc31e,},Annotations:map[string]string{io.kubernetes.container.hash: fc65829b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d,PodSandboxId:71a6c802506a6fb545cbff91ae04826d91f65cca666f683c94aec78ba7e343a7,Metadata:&Container
Metadata{Name:registry,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,Annotations:map[string]string{},},ImageRef:docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3,State:CONTAINER_RUNNING,CreatedAt:1707861524237589677,Labels:map[string]string{io.kubernetes.container.name: registry,io.kubernetes.pod.name: registry-75mmv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a146cfb0-9524-40f7-8bab-91a56de079a4,},Annotations:map[string]string{io.kubernetes.container.hash: 92387334,io.kubernetes.container.ports: [{\"containerPort\":5000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768e5c5d654be9e33f4b897f45602d9d0fec04d123a0f45d4bc27ef3b78e9629,PodSandboxId:9898c56b2710
dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:1,},Image:&ImageSpec{Image:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_RUNNING,CreatedAt:1707861514928437313,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:02537a0407812604525fe912de64bd8a1c1189904f86284966e2d782fa705c35,PodSandboxId:4edb48eb984f3d17c86e036476c120292a3c8d4c0dccd140de26d9b8176708d4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1707861512938294277,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1e93909-d75e-4377-be18-60377f7ce06d,},Annotations:map[string]string{io.kubernetes.container.hash: e30ff70a,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ae1d6929fafd0e4ff83dee0146c1834001af5389f5d345e585a9976ab28e34d,PodSandboxId:7e02128bd13f2f959be6dadfebaf054349a328f3274fd5ceb116fd2f2b3642d7,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1707861505925143649,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a9a9c7d-1310-4d9f-a91e-f4d49b7ac088,},Annotations:map[string]string{io.kubernetes.container.hash: a3ec2f9,io.kubernetes.container.restartC
ount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26d5d917bf79ca1db4205f8da5f62d489312141a152e163406ee3792bf201c93,PodSandboxId:6deba3bb4df129cd8614aaf1585852c584ad00737c7d5a9d784dcad35968fef8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1707861504155113411,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d05280-dffc-4b3e-87af-241451cc1cdc,},Annotations:map[string]string{io.kubernetes.container.hash: e4b
9663,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744aec35d59c6d98b63cc672bdd3c5f78c5cdb86544eb475074d963532a4b54c,PodSandboxId:682958d3615c642e8b85bdbeaec42dde0a3acc82936e3277fbe4566de2f2a5b6,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1707861502372620935,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-f89wf,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 4a792c70-a32f-4608-98ec-26b9c817b4f5,},Annotations:map[string]string{io.kubernetes.container.hash: 4217d917,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d,PodSandboxId:c3d217f26dc9911112f5114795416f04fcfcae71948497427090bd8d458c1e0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707861490189719519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 71d448ab-f65e-4d8e-8a5b-3b87db5f4d7f,},Annotations:map[string]string{io.kubernetes.container.hash: 5aac46dd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fc3d94fc7df67836414431947bc65141e5d91c06624c2b745de8125a6d24a31,PodSandboxId:0917fb0b972a3c91d43dda9e00d7bef468d73b1a6078cc6016968664991baf4d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1707861488719547163,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-gmcgl,io.kubernetes.pod.namespace: y
akd-dashboard,io.kubernetes.pod.uid: a1dd1624-7e81-4306-8d34-c020ef448cac,},Annotations:map[string]string{io.kubernetes.container.hash: 17ae4be3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:480474df46bfbadef1d32884ad0592dba599989897a33fe941d03cc96faa2c18,PodSandboxId:9898c56b2710dee8cba60d8958b1ddd38b62ef8de95b31560e27d7a0de32287b,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},},ImageRef:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,State:CONTAINER_EXITED,CreatedAt:17078614824021796
01,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-ghxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 723e578e-19de-4bcf-86ed-9de4ffbe5650,},Annotations:map[string]string{io.kubernetes.container.hash: bb8a8e41,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762,PodSandboxId:9a8b08e56b70486051d8992b8cdb759338ad23ed72ff6a23fc2a262e653d4f21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab931
3c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707861474956861847,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gkr4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ea7ce55-faee-4a44-a16d-98788c2932b6,},Annotations:map[string]string{io.kubernetes.container.hash: 5ffa6879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f,PodSandboxId:771a218b8b1a7a79096a35f27783409fc66d9ce91ea1c616200d900b2d34fbbb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Sta
te:CONTAINER_RUNNING,CreatedAt:1707861475970273947,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hlmz9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8da21de0-1ed2-4221-8e70-36bbe7832fe0,},Annotations:map[string]string{io.kubernetes.container.hash: 1a067e7d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4,PodSandboxId:a0279864e9eb6ef88019a7be55e648db22ce72b7a0a6482d40357ea9d60ef4a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6d
bc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707861446475558099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753c82c7870ea31d4181fa744c6910e0,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7,PodSandboxId:c57f992e5723d5d0bd68075b9a85e4a518869a980e60d9d8c87131c5f8a2ff39,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5
d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707861446099600263,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08a2489c6cce9f7b96056c2d8c264f4,},Annotations:map[string]string{io.kubernetes.container.hash: f75b68e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647,PodSandboxId:50ba5dbb8ab1871045312d70f79b9795d6949bace3587446f86d0af6a276f5c3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map
[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707861445850171842,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 976c886cb6512aaac367cb4d1401aa5e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a70a08e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd,PodSandboxId:31e157eb2fd3ed9fb0577095e296f09efd7ed64b429fa228b3dc67f8e0003bc8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[strin
g]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707861445826604097,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-548360,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0b9fef824614c7e96285e9f2336030,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=53f726a8-79c9-4619-a55d-db7669e84641 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	02b18dcccee0a       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          3 seconds ago        Exited              registry-test                            0                   a3e836c914df5       registry-test
	e4c001726db08       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          17 seconds ago       Running             csi-snapshotter                          0                   682958d3615c6       csi-hostpathplugin-f89wf
	1a48823c17572       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             18 seconds ago       Exited              patch                                    3                   6940a26e02f3c       gcp-auth-certs-patch-9t94c
	eb80cdf6a0b54       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          19 seconds ago       Running             csi-provisioner                          0                   682958d3615c6       csi-hostpathplugin-f89wf
	a5d5509c8a837       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 21 seconds ago       Running             gcp-auth                                 0                   950e8a5fbeda8       gcp-auth-d4c87556c-j7fcp
	05aa5c4be13e3       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            23 seconds ago       Running             liveness-probe                           0                   682958d3615c6       csi-hostpathplugin-f89wf
	4ea12f128374e       registry.k8s.io/ingress-nginx/controller@sha256:39608f8d250ced2afb4cbaff786f6ee269aeb494a3de5c5424c021b2af085d75                             24 seconds ago       Running             controller                               0                   1857afa97fb8b       ingress-nginx-controller-69cff4fd79-9cp25
	ecd7f1bcdc371       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           32 seconds ago       Running             hostpath                                 0                   682958d3615c6       csi-hostpathplugin-f89wf
	33a9360e24b5b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                35 seconds ago       Running             node-driver-registrar                    0                   682958d3615c6       csi-hostpathplugin-f89wf
	fa131601c5a65       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                                             35 seconds ago       Exited              patch                                    2                   6df2548c025b0       ingress-nginx-admission-patch-cjclt
	d413dfe614824       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385                   36 seconds ago       Exited              create                                   0                   038f551356f39       ingress-nginx-admission-create-xfjh9
	f67b899794da3       gcr.io/k8s-minikube/kube-registry-proxy@sha256:1965e593892b5c2c26ea37ddc6e7c5ed6896211078ca7e01ead479048f523bf5                              36 seconds ago       Running             registry-proxy                           0                   c201e09fce62a       registry-proxy-mfshx
	316017b48544e       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  42 seconds ago       Running             tiller                                   0                   0285162c468e9       tiller-deploy-7b677967b9-jn92b
	b48aaa6b69f06       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      45 seconds ago       Running             volume-snapshot-controller               0                   a5b24d9544aa9       snapshot-controller-58dbcc7b99-56xxb
	b3c6484ab3f7a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             45 seconds ago       Running             local-path-provisioner                   0                   281abeb2b2b61       local-path-provisioner-78b46b4d5c-lnqs8
	4816e9514181c       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49                               47 seconds ago       Running             cloud-spanner-emulator                   0                   7e00a6ef70716       cloud-spanner-emulator-64c8c85f65-bwgbm
	86c9a67d54e84       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      51 seconds ago       Running             volume-snapshot-controller               0                   bf5586c764501       snapshot-controller-58dbcc7b99-8pfd2
	198d2794a749d       docker.io/library/registry@sha256:12202eb78732e22f8658d595bd6e3d47ef9f13ede78e94e90974c020c7d7c1b3                                           52 seconds ago       Running             registry                                 0                   71a6c802506a6       registry-75mmv
	768e5c5d654be       b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86                                                                             About a minute ago   Running             metrics-server                           1                   9898c56b2710d       metrics-server-69cf46c98-ghxhg
	02537a0407812       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   4edb48eb984f3       kube-ingress-dns-minikube
	4ae1d6929fafd       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   7e02128bd13f2       csi-hostpath-resizer-0
	26d5d917bf79c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   6deba3bb4df12       csi-hostpath-attacher-0
	744aec35d59c6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   682958d3615c6       csi-hostpathplugin-f89wf
	7ee2992936784       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   c3d217f26dc99       storage-provisioner
	0fc3d94fc7df6       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              About a minute ago   Running             yakd                                     0                   0917fb0b972a3       yakd-dashboard-9947fc6bf-gmcgl
	480474df46bfb       registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca                        About a minute ago   Exited              metrics-server                           0                   9898c56b2710d       metrics-server-69cf46c98-ghxhg
	15b39f73e0d38       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   771a218b8b1a7       coredns-5dd5756b68-hlmz9
	0192c0afa2f2c       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             About a minute ago   Running             kube-proxy                               0                   9a8b08e56b704       kube-proxy-gkr4l
	40244ef5b414f       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago        Running             kube-scheduler                           0                   a0279864e9eb6       kube-scheduler-addons-548360
	8931f587c17a1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   c57f992e5723d       etcd-addons-548360
	2482964b1a599       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago        Running             kube-apiserver                           0                   50ba5dbb8ab18       kube-apiserver-addons-548360
	d2b3356ee37bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago        Running             kube-controller-manager                  0                   31e157eb2fd3e       kube-controller-manager-addons-548360
	
	
	==> coredns [15b39f73e0d381cf7b58193099cb3df5ea78a0436d842b158456497471b1120f] <==
	[INFO] 10.244.0.8:48378 - 8943 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000174407s
	[INFO] 10.244.0.8:50904 - 9396 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126394s
	[INFO] 10.244.0.8:50904 - 62902 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082608s
	[INFO] 10.244.0.8:38606 - 12576 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101778s
	[INFO] 10.244.0.8:38606 - 31522 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000512816s
	[INFO] 10.244.0.8:45046 - 2380 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000148576s
	[INFO] 10.244.0.8:45046 - 35394 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000065633s
	[INFO] 10.244.0.8:59262 - 29699 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000089241s
	[INFO] 10.244.0.8:59262 - 31806 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042554s
	[INFO] 10.244.0.8:42648 - 23391 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034886s
	[INFO] 10.244.0.8:42648 - 7265 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032557s
	[INFO] 10.244.0.8:41361 - 17775 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046609s
	[INFO] 10.244.0.8:41361 - 36193 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043624s
	[INFO] 10.244.0.8:56881 - 64064 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000037926s
	[INFO] 10.244.0.8:56881 - 32577 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000049081s
	[INFO] 10.244.0.21:37152 - 18992 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276141s
	[INFO] 10.244.0.21:39878 - 2314 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000352264s
	[INFO] 10.244.0.21:54655 - 40748 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000240623s
	[INFO] 10.244.0.21:37954 - 36720 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077009s
	[INFO] 10.244.0.21:56454 - 34022 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066434s
	[INFO] 10.244.0.21:40796 - 51297 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000053638s
	[INFO] 10.244.0.21:39708 - 12368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000765668s
	[INFO] 10.244.0.21:46744 - 14520 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.001243228s
	[INFO] 10.244.0.22:59197 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000461553s
	[INFO] 10.244.0.22:43708 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185532s
	
	
	==> describe nodes <==
	Name:               addons-548360
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-548360
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=addons-548360
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T21_57_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-548360
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-548360"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 21:57:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-548360
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 21:59:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 21:59:07 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 21:59:07 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 21:59:07 +0000   Tue, 13 Feb 2024 21:57:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 21:59:07 +0000   Tue, 13 Feb 2024 21:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-548360
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ddcf3dbe0b24be5a4bc22610392b9da
	  System UUID:                3ddcf3db-e0b2-4be5-a4bc-22610392b9da
	  Boot ID:                    62459984-65af-4c5d-860c-ddc2dcffdbef
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-bwgbm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  gcp-auth                    gcp-auth-d4c87556c-j7fcp                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-9cp25                     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-hlmz9                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     110s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpathplugin-f89wf                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 etcd-addons-548360                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m6s
	  kube-system                 helm-test                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-apiserver-addons-548360                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-addons-548360                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-gkr4l                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-addons-548360                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-69cf46c98-ghxhg                                100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         104s
	  kube-system                 registry-75mmv                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 snapshot-controller-58dbcc7b99-56xxb                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 snapshot-controller-58dbcc7b99-8pfd2                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 tiller-deploy-7b677967b9-jn92b                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  local-path-storage          helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  local-path-storage          local-path-provisioner-78b46b4d5c-lnqs8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-gmcgl                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 90s                    kube-proxy       
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node addons-548360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node addons-548360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node addons-548360 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node addons-548360 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node addons-548360 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node addons-548360 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m4s                   kubelet          Node addons-548360 status is now: NodeReady
	  Normal  RegisteredNode           112s                   node-controller  Node addons-548360 event: Registered Node addons-548360 in Controller
	
	
	==> dmesg <==
	[Feb13 21:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.098861] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.528837] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.708012] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150386] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Feb13 21:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.917003] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.112763] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[  +0.151138] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.110894] systemd-fstab-generator[675]: Ignoring "noauto" for root device
	[  +0.218069] systemd-fstab-generator[699]: Ignoring "noauto" for root device
	[  +9.823024] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +10.251975] systemd-fstab-generator[1243]: Ignoring "noauto" for root device
	[ +21.897547] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 21:58] kauditd_printk_skb: 35 callbacks suppressed
	[ +24.955625] kauditd_printk_skb: 16 callbacks suppressed
	[ +16.954485] kauditd_printk_skb: 16 callbacks suppressed
	[Feb13 21:59] kauditd_printk_skb: 34 callbacks suppressed
	[ +21.540681] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.207117] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8931f587c17a1c5009c93f97a932ded0a7d64aba1c1461a563d62d78da95a9d7] <==
	{"level":"info","ts":"2024-02-13T21:58:59.601239Z","caller":"traceutil/trace.go:171","msg":"trace[483910184] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"434.178854ms","start":"2024-02-13T21:58:59.167046Z","end":"2024-02-13T21:58:59.601225Z","steps":["trace[483910184] 'process raft request'  (duration: 434.036344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:59.601373Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:58:59.167031Z","time spent":"434.285686ms","remote":"127.0.0.1:56770","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1095 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-02-13T21:58:59.601723Z","caller":"traceutil/trace.go:171","msg":"trace[1757431256] linearizableReadLoop","detail":"{readStateIndex:1127; appliedIndex:1127; }","duration":"409.377109ms","start":"2024-02-13T21:58:59.192338Z","end":"2024-02-13T21:58:59.601715Z","steps":["trace[1757431256] 'read index received'  (duration: 409.37452ms)","trace[1757431256] 'applied index is now lower than readState.Index'  (duration: 2.11µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T21:58:59.601919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.450584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82283"}
	{"level":"info","ts":"2024-02-13T21:58:59.601972Z","caller":"traceutil/trace.go:171","msg":"trace[718108313] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1096; }","duration":"408.511978ms","start":"2024-02-13T21:58:59.193454Z","end":"2024-02-13T21:58:59.601966Z","steps":["trace[718108313] 'agreement among raft nodes before linearized reading'  (duration: 408.341346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:59.602085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:58:59.193382Z","time spent":"408.69436ms","remote":"127.0.0.1:56774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":82307,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-02-13T21:58:59.602284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.963707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14030"}
	{"level":"info","ts":"2024-02-13T21:58:59.602335Z","caller":"traceutil/trace.go:171","msg":"trace[1449650380] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1096; }","duration":"410.014435ms","start":"2024-02-13T21:58:59.192314Z","end":"2024-02-13T21:58:59.602328Z","steps":["trace[1449650380] 'agreement among raft nodes before linearized reading'  (duration: 409.925667ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:59.602356Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-02-13T21:58:59.1923Z","time spent":"410.049997ms","remote":"127.0.0.1:56774","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14054,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-02-13T21:58:59.602815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.476495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82283"}
	{"level":"info","ts":"2024-02-13T21:58:59.602874Z","caller":"traceutil/trace.go:171","msg":"trace[1467028126] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1096; }","duration":"265.541409ms","start":"2024-02-13T21:58:59.337325Z","end":"2024-02-13T21:58:59.602866Z","steps":["trace[1467028126] 'agreement among raft nodes before linearized reading'  (duration: 265.178369ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:58:59.603033Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.746919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11250"}
	{"level":"info","ts":"2024-02-13T21:58:59.603056Z","caller":"traceutil/trace.go:171","msg":"trace[1154841573] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1096; }","duration":"274.772585ms","start":"2024-02-13T21:58:59.328276Z","end":"2024-02-13T21:58:59.603049Z","steps":["trace[1154841573] 'agreement among raft nodes before linearized reading'  (duration: 274.711098ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.812476Z","caller":"traceutil/trace.go:171","msg":"trace[2024106671] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"190.697402ms","start":"2024-02-13T21:59:07.621763Z","end":"2024-02-13T21:59:07.812461Z","steps":["trace[2024106671] 'process raft request'  (duration: 190.555882ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.820547Z","caller":"traceutil/trace.go:171","msg":"trace[646463291] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"181.380432ms","start":"2024-02-13T21:59:07.639153Z","end":"2024-02-13T21:59:07.820533Z","steps":["trace[646463291] 'process raft request'  (duration: 180.910336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:07.821626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.263213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13864"}
	{"level":"info","ts":"2024-02-13T21:59:07.821662Z","caller":"traceutil/trace.go:171","msg":"trace[446632817] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1163; }","duration":"128.340898ms","start":"2024-02-13T21:59:07.693313Z","end":"2024-02-13T21:59:07.821654Z","steps":["trace[446632817] 'agreement among raft nodes before linearized reading'  (duration: 128.207149ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:07.830637Z","caller":"traceutil/trace.go:171","msg":"trace[514868235] linearizableReadLoop","detail":"{readStateIndex:1195; appliedIndex:1193; }","duration":"126.871603ms","start":"2024-02-13T21:59:07.693336Z","end":"2024-02-13T21:59:07.820208Z","steps":["trace[514868235] 'read index received'  (duration: 118.892765ms)","trace[514868235] 'applied index is now lower than readState.Index'  (duration: 7.977994ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T21:59:12.072317Z","caller":"traceutil/trace.go:171","msg":"trace[736329166] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"180.54658ms","start":"2024-02-13T21:59:11.891749Z","end":"2024-02-13T21:59:12.072296Z","steps":["trace[736329166] 'process raft request'  (duration: 179.479713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T21:59:17.947817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.888294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82374"}
	{"level":"info","ts":"2024-02-13T21:59:17.947897Z","caller":"traceutil/trace.go:171","msg":"trace[659305666] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1212; }","duration":"109.986646ms","start":"2024-02-13T21:59:17.837899Z","end":"2024-02-13T21:59:17.947886Z","steps":["trace[659305666] 'range keys from in-memory index tree'  (duration: 109.698927ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:36.959714Z","caller":"traceutil/trace.go:171","msg":"trace[912198313] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1355; }","duration":"121.003672ms","start":"2024-02-13T21:59:36.838699Z","end":"2024-02-13T21:59:36.959703Z","steps":["trace[912198313] 'process raft request'  (duration: 120.529885ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T21:59:36.959612Z","caller":"traceutil/trace.go:171","msg":"trace[2020295862] linearizableReadLoop","detail":"{readStateIndex:1395; appliedIndex:1394; }","duration":"111.727126ms","start":"2024-02-13T21:59:36.847642Z","end":"2024-02-13T21:59:36.959369Z","steps":["trace[2020295862] 'read index received'  (duration: 111.538356ms)","trace[2020295862] 'applied index is now lower than readState.Index'  (duration: 187.936µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T21:59:36.960201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.522936ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T21:59:36.960322Z","caller":"traceutil/trace.go:171","msg":"trace[896758103] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1355; }","duration":"112.692499ms","start":"2024-02-13T21:59:36.847616Z","end":"2024-02-13T21:59:36.960309Z","steps":["trace[896758103] 'agreement among raft nodes before linearized reading'  (duration: 112.137065ms)"],"step_count":1}
	
	
	==> gcp-auth [a5d5509c8a837434627f6c2b1a873b69e19c39b36887d9f5750d80a203e2202b] <==
	2024/02/13 21:59:16 GCP Auth Webhook started!
	2024/02/13 21:59:31 Ready to marshal response ...
	2024/02/13 21:59:31 Ready to write response ...
	2024/02/13 21:59:32 Ready to marshal response ...
	2024/02/13 21:59:32 Ready to write response ...
	2024/02/13 21:59:33 Ready to marshal response ...
	2024/02/13 21:59:33 Ready to write response ...
	2024/02/13 21:59:34 Ready to marshal response ...
	2024/02/13 21:59:34 Ready to write response ...
	2024/02/13 21:59:38 Ready to marshal response ...
	2024/02/13 21:59:38 Ready to write response ...
	
	
	==> kernel <==
	 21:59:38 up 2 min,  0 users,  load average: 4.59, 2.48, 0.97
	Linux addons-548360 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2482964b1a599ab2c37b08ea26c43d8cacc431a054a383cfa9547f5dd6b64647] <==
	W0213 21:57:58.140557       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0213 21:57:59.374335       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 21:58:00.715971       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 21:58:00.990320       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.240.216"}
	I0213 21:58:05.716828       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 21:58:30.732692       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0213 21:58:38.766834       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.171.9:443: connect: connection refused
	W0213 21:58:38.770330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 21:58:38.785938       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0213 21:58:38.803038       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.171.9:443: connect: connection refused
	I0213 21:58:38.803300       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0213 21:58:38.806930       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.171.9:443: connect: connection refused
	E0213 21:58:38.815151       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.171.9:443: connect: connection refused
	E0213 21:58:38.856831       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1: Get "https://10.98.171.9:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.98.171.9:443: connect: connection refused
	I0213 21:58:39.063022       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0213 21:59:28.126926       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc00b65e0c0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc008d1e0f0), ResponseWriter:(*httpsnoop.rw)(0xc008d1e0f0), Flusher:(*httpsnoop.rw)(0xc008d1e0f0), CloseNotifier:(*httpsnoop.rw)(0xc008d1e0f0), Pusher:(*httpsnoop.rw)(0xc008d1e0f0)}}, encoder:(*versioning.codec)(0xc00e933ea0), memAllocator:(*runtime.Allocator)(0xc009b439e0)})
	I0213 21:59:28.155262       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0213 21:59:28.170597       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0213 21:59:29.193634       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0213 21:59:30.734001       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 21:59:37.826714       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0213 21:59:38.417902       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.228.30"}
	
	
	==> kube-controller-manager [d2b3356ee37bd713b4b312052b22b8c7b0acbabfc4b845f6856be0b7818dc5fd] <==
	I0213 21:59:19.933009       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:19.938774       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:20.908671       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:20.966373       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.029443       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.920036       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.976234       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.987588       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.995797       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0213 21:59:21.996262       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0213 21:59:25.045182       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0213 21:59:25.082118       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	E0213 21:59:29.195863       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0213 21:59:30.234283       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 21:59:30.234480       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 21:59:31.151807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="24.793706ms"
	I0213 21:59:31.152171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="201.769µs"
	W0213 21:59:32.709227       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 21:59:32.709297       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 21:59:33.889037       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0213 21:59:34.061913       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0213 21:59:36.963069       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="20.258µs"
	W0213 21:59:37.772237       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0213 21:59:37.772371       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0213 21:59:38.346639       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	
	
	==> kube-proxy [0192c0afa2f2cdf8e6a3d6594df43eb1de3de8969fd3f276becb8ecc0b73a762] <==
	I0213 21:58:05.871165       1 server_others.go:69] "Using iptables proxy"
	I0213 21:58:06.166006       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0213 21:58:07.481140       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 21:58:07.481186       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 21:58:07.756118       1 server_others.go:152] "Using iptables Proxier"
	I0213 21:58:07.756219       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 21:58:07.756513       1 server.go:846] "Version info" version="v1.28.4"
	I0213 21:58:07.756712       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 21:58:07.867543       1 config.go:188] "Starting service config controller"
	I0213 21:58:07.867616       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 21:58:07.867657       1 config.go:97] "Starting endpoint slice config controller"
	I0213 21:58:07.867664       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 21:58:07.890498       1 config.go:315] "Starting node config controller"
	I0213 21:58:07.890593       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 21:58:08.276619       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 21:58:08.371098       1 shared_informer.go:318] Caches are synced for service config
	I0213 21:58:08.391154       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [40244ef5b414f74bcf1bd28e05e719065595e00d75ce3edc136c06232364f3d4] <==
	W0213 21:57:31.811368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:31.811567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:31.843618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 21:57:31.843674       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 21:57:31.970694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 21:57:31.970845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 21:57:32.113556       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 21:57:32.113688       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 21:57:32.129734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 21:57:32.129934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 21:57:32.203635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:32.203702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:32.220353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 21:57:32.220512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 21:57:32.254286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 21:57:32.254380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 21:57:32.295786       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 21:57:32.295849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 21:57:32.297714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 21:57:32.297856       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 21:57:32.305867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 21:57:32.306217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 21:57:32.327036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 21:57:32.327125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0213 21:57:33.936970       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 21:56:58 UTC, ends at Tue 2024-02-13 21:59:39 UTC. --
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.094593    1250 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rs2\" (UniqueName: \"kubernetes.io/projected/a146cfb0-9524-40f7-8bab-91a56de079a4-kube-api-access-d2rs2\") pod \"a146cfb0-9524-40f7-8bab-91a56de079a4\" (UID: \"a146cfb0-9524-40f7-8bab-91a56de079a4\") "
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.117970    1250 scope.go:117] "RemoveContainer" containerID="f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.133043    1250 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a146cfb0-9524-40f7-8bab-91a56de079a4-kube-api-access-d2rs2" (OuterVolumeSpecName: "kube-api-access-d2rs2") pod "a146cfb0-9524-40f7-8bab-91a56de079a4" (UID: "a146cfb0-9524-40f7-8bab-91a56de079a4"). InnerVolumeSpecName "kube-api-access-d2rs2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.158695    1250 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.196677    1250 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d2rs2\" (UniqueName: \"kubernetes.io/projected/a146cfb0-9524-40f7-8bab-91a56de079a4-kube-api-access-d2rs2\") on node \"addons-548360\" DevicePath \"\""
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.277317    1250 topology_manager.go:215] "Topology Admit Handler" podUID="cdef0026-04e2-4f2d-a0be-076dce5a611b" podNamespace="default" podName="nginx"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.277787    1250 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cffa27a8-9bfe-47ba-8749-6ec7d781d992" containerName="registry-test"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.277900    1250 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dad71134-5cc3-4fa4-b391-4a08b89d5d04" containerName="registry-proxy"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.277917    1250 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a146cfb0-9524-40f7-8bab-91a56de079a4" containerName="registry"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.278036    1250 memory_manager.go:346] "RemoveStaleState removing state" podUID="cffa27a8-9bfe-47ba-8749-6ec7d781d992" containerName="registry-test"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.278136    1250 memory_manager.go:346] "RemoveStaleState removing state" podUID="dad71134-5cc3-4fa4-b391-4a08b89d5d04" containerName="registry-proxy"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.278155    1250 memory_manager.go:346] "RemoveStaleState removing state" podUID="a146cfb0-9524-40f7-8bab-91a56de079a4" containerName="registry"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.374795    1250 scope.go:117] "RemoveContainer" containerID="f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.375539    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed\": container with ID starting with f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed not found: ID does not exist" containerID="f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.375584    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed"} err="failed to get container status \"f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed\": rpc error: code = NotFound desc = could not find container \"f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed\": container with ID starting with f67b899794da38daf64cdf562a111073d2edf47e3fa1fd48b04eb0f6c750a1ed not found: ID does not exist"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.375596    1250 scope.go:117] "RemoveContainer" containerID="198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.398885    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cdef0026-04e2-4f2d-a0be-076dce5a611b-gcp-creds\") pod \"nginx\" (UID: \"cdef0026-04e2-4f2d-a0be-076dce5a611b\") " pod="default/nginx"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.399025    1250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdz6\" (UniqueName: \"kubernetes.io/projected/cdef0026-04e2-4f2d-a0be-076dce5a611b-kube-api-access-dgdz6\") pod \"nginx\" (UID: \"cdef0026-04e2-4f2d-a0be-076dce5a611b\") " pod="default/nginx"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.438156    1250 scope.go:117] "RemoveContainer" containerID="198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.438814    1250 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d\": container with ID starting with 198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d not found: ID does not exist" containerID="198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.438959    1250 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d"} err="failed to get container status \"198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d\": rpc error: code = NotFound desc = could not find container \"198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d\": container with ID starting with 198d2794a749d75c2d3f92251fd5ea15c3a41b1c8c3335df94f6a2507008c63d not found: ID does not exist"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.483930    1250 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/helm-test" podStartSLOduration=3.516901189 podCreationTimestamp="2024-02-13 21:59:32 +0000 UTC" firstStartedPulling="2024-02-13 21:59:34.280381885 +0000 UTC m=+119.910147114" lastFinishedPulling="2024-02-13 21:59:37.247355341 +0000 UTC m=+122.877120577" observedRunningTime="2024-02-13 21:59:38.457559835 +0000 UTC m=+124.087325085" watchObservedRunningTime="2024-02-13 21:59:38.483874652 +0000 UTC m=+124.113639901"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.564123    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a146cfb0-9524-40f7-8bab-91a56de079a4" path="/var/lib/kubelet/pods/a146cfb0-9524-40f7-8bab-91a56de079a4/volumes"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: I0213 21:59:38.564680    1250 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dad71134-5cc3-4fa4-b391-4a08b89d5d04" path="/var/lib/kubelet/pods/dad71134-5cc3-4fa4-b391-4a08b89d5d04/volumes"
	Feb 13 21:59:38 addons-548360 kubelet[1250]: E0213 21:59:38.577931    1250 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = Unknown desc = unable to prepare attach endpoint" containerID="bfb11d5133aee6d3fbe2c0cd84c1e83b06f62325ad9dd6642c85ea7f683e5c43"
	
	
	==> storage-provisioner [7ee29929367848ed1e1f63ffbac1ba38ec623b8819eaac7552b57e96f60c605d] <==
	I0213 21:58:11.141648       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 21:58:11.193917       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 21:58:11.198643       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 21:58:11.253381       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 21:58:11.253836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7!
	I0213 21:58:11.284595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"776e0c08-5210-4cad-a814-b6a72b9380a1", APIVersion:"v1", ResourceVersion:"887", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7 became leader
	I0213 21:58:11.463268       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-548360_d6e6ba67-4e49-493b-a609-63b5d27abbe7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-548360 -n addons-548360
helpers_test.go:261: (dbg) Run:  kubectl --context addons-548360 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path gcp-auth-certs-patch-9t94c ingress-nginx-admission-create-xfjh9 ingress-nginx-admission-patch-cjclt helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-548360 describe pod nginx test-local-path gcp-auth-certs-patch-9t94c ingress-nginx-admission-create-xfjh9 ingress-nginx-admission-patch-cjclt helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-548360 describe pod nginx test-local-path gcp-auth-certs-patch-9t94c ingress-nginx-admission-create-xfjh9 ingress-nginx-admission-patch-cjclt helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082: exit status 1 (104.653586ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-548360/192.168.39.217
	Start Time:       Tue, 13 Feb 2024 21:59:38 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dgdz6 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-dgdz6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-548360
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qn7p8 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-qn7p8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-9t94c" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-xfjh9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cjclt" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-548360 describe pod nginx test-local-path gcp-auth-certs-patch-9t94c ingress-nginx-admission-create-xfjh9 ingress-nginx-admission-patch-cjclt helper-pod-create-pvc-94c1659d-c197-459f-ae81-0c70edc6f082: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.06s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-548360
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-548360: exit status 82 (2m0.282150283s)

                                                
                                                
-- stdout --
	* Stopping node "addons-548360"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-548360" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-548360
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-548360: exit status 11 (21.490059516s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-548360" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-548360
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-548360: exit status 11 (6.14354698s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-548360" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-548360
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-548360: exit status 11 (6.143817043s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-548360" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.06s)
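For context, the failing sequence above can be replayed by hand against the same profile; a minimal sketch, assuming the addons-548360 profile still exists and out/minikube-linux-amd64 is the locally built binary under test (all commands are taken from the run above or from its error output):

	# the stop is the step that timed out in this run
	out/minikube-linux-amd64 stop -p addons-548360
	out/minikube-linux-amd64 addons enable dashboard -p addons-548360
	out/minikube-linux-amd64 addons disable dashboard -p addons-548360
	out/minikube-linux-amd64 addons disable gvisor -p addons-548360
	# as the error boxes above suggest, collect logs for a bug report
	out/minikube-linux-amd64 logs --file=logs.txt

In this run the stop timed out with the VM still reported as "Running" (GUEST_STOP_TIMEOUT), and each subsequent addons call then failed to reach 192.168.39.217:22 ("no route to host"), exiting with status 11.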

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0213 22:09:23.974181   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.027911493s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-407129
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr: (5.432505285s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image ls: (2.399502136s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-407129" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.88s)
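The commands this test drives (all taken verbatim from the run above) can be replayed manually to check whether the image actually lands in the cluster's runtime; a minimal sketch, assuming the functional-407129 profile is still up:

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-407129
	out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
	out/minikube-linux-amd64 -p functional-407129 image ls

The failure is that the final image ls does not list gcr.io/google-containers/addon-resizer:functional-407129 even though image load completed without error.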

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (170.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-741217 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0213 22:12:05.257632   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-741217 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.875741161s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-741217 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-741217 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d14fae32-392d-4e2b-b4b0-41613ce45b7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d14fae32-392d-4e2b-b4b0-41613ce45b7f] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.003578451s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0213 22:14:11.137933   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.143197   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.153466   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.173771   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.214071   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.294422   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.455090   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:11.775721   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:12.416670   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:13.697259   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:16.257578   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:21.378589   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:14:21.413989   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-741217 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.45445604s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
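The step that failed above is simply a curl issued inside the VM with the ingress Host header, retried until it answers or the test's timeout expires. The sketch below illustrates that retry pattern only and is not the code in addons_test.go; the binary path, profile name, and curl arguments are copied from the log lines above, while the two-minute deadline and five-second poll interval are assumptions made for the example.

// Hedged sketch: poll the in-VM ingress endpoint the way the failing step above does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall deadline
	for {
		// Same command the test invokes: curl inside the minikube VM with the nginx.example.com Host header.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "ingress-addon-legacy-741217",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
		if err == nil {
			fmt.Printf("ingress answered:\n%s\n", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up waiting for ingress: %v\n%s\n", err, out)
			return
		}
		time.Sleep(5 * time.Second) // assumed poll interval
	}
}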
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-741217 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
E0213 22:14:31.619671   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.71
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons disable ingress-dns --alsologtostderr -v=1: (13.044122774s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons disable ingress --alsologtostderr -v=1
E0213 22:14:49.100679   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:14:52.100524   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons disable ingress --alsologtostderr -v=1: (7.580979794s)
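For the ingress-dns half of the check, the "nslookup hello-john.test 192.168.39.71" run above resolves a test hostname directly against the node IP acting as a DNS server. A minimal Go equivalent is sketched below; it assumes only that the IP reported in the log answers DNS queries on port 53, and it is not the helper the test itself uses.

// Hedged sketch: resolve the ingress-dns test name against the node IP, like the nslookup step above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Send the query to the minikube node IP from the log instead of the system resolver.
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.39.71:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}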
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-741217 -n ingress-addon-legacy-741217
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-741217 logs -n 25: (1.242562373s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-407129                                                  | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-407129                                                  | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| mount          | -p functional-407129                                                  | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| ssh            | functional-407129 ssh findmnt                                         | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | -T /mount2                                                            |                             |         |         |                     |                     |
	| ssh            | functional-407129 ssh findmnt                                         | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | -T /mount3                                                            |                             |         |         |                     |                     |
	| mount          | -p functional-407129                                                  | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC |                     |
	|                | --kill=true                                                           |                             |         |         |                     |                     |
	| update-context | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| update-context | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | update-context                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                             |         |         |                     |                     |
	| image          | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | image ls --format short                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| service        | functional-407129 service                                             | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | hello-node-connect --url                                              |                             |         |         |                     |                     |
	| image          | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | image ls --format yaml                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| ssh            | functional-407129 ssh pgrep                                           | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC |                     |
	|                | buildkitd                                                             |                             |         |         |                     |                     |
	| image          | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | image ls --format json                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-407129 image build -t                                      | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:10 UTC |
	|                | localhost/my-image:functional-407129                                  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                             |         |         |                     |                     |
	| image          | functional-407129                                                     | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:09 UTC | 13 Feb 24 22:09 UTC |
	|                | image ls --format table                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	| image          | functional-407129 image ls                                            | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:10 UTC | 13 Feb 24 22:10 UTC |
	| delete         | -p functional-407129                                                  | functional-407129           | jenkins | v1.32.0 | 13 Feb 24 22:10 UTC | 13 Feb 24 22:10 UTC |
	| start          | -p ingress-addon-legacy-741217                                        | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:10 UTC | 13 Feb 24 22:11 UTC |
	|                | --kubernetes-version=v1.18.20                                         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                    |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                              |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-741217                                           | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:11 UTC | 13 Feb 24 22:12 UTC |
	|                | addons enable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-741217                                           | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:12 UTC | 13 Feb 24 22:12 UTC |
	|                | addons enable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-741217                                           | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:12 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-741217 ip                                        | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:14 UTC | 13 Feb 24 22:14 UTC |
	| addons         | ingress-addon-legacy-741217                                           | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:14 UTC | 13 Feb 24 22:14 UTC |
	|                | addons disable ingress-dns                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-741217                                           | ingress-addon-legacy-741217 | jenkins | v1.32.0 | 13 Feb 24 22:14 UTC | 13 Feb 24 22:14 UTC |
	|                | addons disable ingress                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                             |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 22:10:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 22:10:02.529714   25250 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:10:02.529973   25250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:10:02.529983   25250 out.go:304] Setting ErrFile to fd 2...
	I0213 22:10:02.529988   25250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:10:02.530179   25250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:10:02.530739   25250 out.go:298] Setting JSON to false
	I0213 22:10:02.531750   25250 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3154,"bootTime":1707859049,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:10:02.531813   25250 start.go:138] virtualization: kvm guest
	I0213 22:10:02.534007   25250 out.go:177] * [ingress-addon-legacy-741217] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:10:02.535482   25250 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:10:02.535495   25250 notify.go:220] Checking for updates...
	I0213 22:10:02.538053   25250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:10:02.539263   25250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:10:02.540483   25250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:10:02.541797   25250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:10:02.543054   25250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:10:02.544592   25250 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:10:02.580870   25250 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 22:10:02.582590   25250 start.go:298] selected driver: kvm2
	I0213 22:10:02.582606   25250 start.go:902] validating driver "kvm2" against <nil>
	I0213 22:10:02.582617   25250 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:10:02.583355   25250 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 22:10:02.583438   25250 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 22:10:02.598377   25250 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 22:10:02.598438   25250 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 22:10:02.598656   25250 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 22:10:02.598727   25250 cni.go:84] Creating CNI manager for ""
	I0213 22:10:02.598747   25250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 22:10:02.598762   25250 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 22:10:02.598778   25250 start_flags.go:321] config:
	{Name:ingress-addon-legacy-741217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-741217 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:10:02.598969   25250 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 22:10:02.601771   25250 out.go:177] * Starting control plane node ingress-addon-legacy-741217 in cluster ingress-addon-legacy-741217
	I0213 22:10:02.603083   25250 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 22:10:02.625432   25250 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0213 22:10:02.625480   25250 cache.go:56] Caching tarball of preloaded images
	I0213 22:10:02.625655   25250 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 22:10:02.627451   25250 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0213 22:10:02.629023   25250 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 22:10:02.661201   25250 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0213 22:10:07.550520   25250 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 22:10:07.550618   25250 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0213 22:10:08.540212   25250 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0213 22:10:08.540526   25250 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/config.json ...
	I0213 22:10:08.540553   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/config.json: {Name:mkc891c76e99c27b0e03132d1e1ee72b9ca4ec4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:08.540720   25250 start.go:365] acquiring machines lock for ingress-addon-legacy-741217: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 22:10:08.540751   25250 start.go:369] acquired machines lock for "ingress-addon-legacy-741217" in 16.314µs
	I0213 22:10:08.540765   25250 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-741217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-741217 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 22:10:08.540842   25250 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 22:10:08.543789   25250 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0213 22:10:08.543969   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:10:08.544012   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:10:08.557568   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I0213 22:10:08.557953   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:10:08.558498   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:10:08.558521   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:10:08.558867   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:10:08.559060   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetMachineName
	I0213 22:10:08.559228   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:08.559388   25250 start.go:159] libmachine.API.Create for "ingress-addon-legacy-741217" (driver="kvm2")
	I0213 22:10:08.559419   25250 client.go:168] LocalClient.Create starting
	I0213 22:10:08.559455   25250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem
	I0213 22:10:08.559489   25250 main.go:141] libmachine: Decoding PEM data...
	I0213 22:10:08.559511   25250 main.go:141] libmachine: Parsing certificate...
	I0213 22:10:08.559583   25250 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem
	I0213 22:10:08.559609   25250 main.go:141] libmachine: Decoding PEM data...
	I0213 22:10:08.559627   25250 main.go:141] libmachine: Parsing certificate...
	I0213 22:10:08.559655   25250 main.go:141] libmachine: Running pre-create checks...
	I0213 22:10:08.559670   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .PreCreateCheck
	I0213 22:10:08.559958   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetConfigRaw
	I0213 22:10:08.560311   25250 main.go:141] libmachine: Creating machine...
	I0213 22:10:08.560325   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Create
	I0213 22:10:08.560445   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Creating KVM machine...
	I0213 22:10:08.561775   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found existing default KVM network
	I0213 22:10:08.562403   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:08.562268   25284 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a10}
	I0213 22:10:08.567649   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | trying to create private KVM network mk-ingress-addon-legacy-741217 192.168.39.0/24...
	I0213 22:10:08.640457   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting up store path in /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217 ...
	I0213 22:10:08.640503   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | private KVM network mk-ingress-addon-legacy-741217 192.168.39.0/24 created
	I0213 22:10:08.640518   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Building disk image from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 22:10:08.640543   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Downloading /home/jenkins/minikube-integration/18171-8990/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 22:10:08.640568   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:08.640359   25284 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:10:08.842373   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:08.842240   25284 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa...
	I0213 22:10:09.023495   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:09.023329   25284 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/ingress-addon-legacy-741217.rawdisk...
	I0213 22:10:09.023535   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Writing magic tar header
	I0213 22:10:09.023558   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Writing SSH key tar header
	I0213 22:10:09.023571   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:09.023447   25284 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217 ...
	I0213 22:10:09.023591   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217
	I0213 22:10:09.023602   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines
	I0213 22:10:09.023619   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217 (perms=drwx------)
	I0213 22:10:09.023634   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines (perms=drwxr-xr-x)
	I0213 22:10:09.023647   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:10:09.023665   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube (perms=drwxr-xr-x)
	I0213 22:10:09.023689   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990 (perms=drwxrwxr-x)
	I0213 22:10:09.023705   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 22:10:09.023720   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990
	I0213 22:10:09.023740   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 22:10:09.023755   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home/jenkins
	I0213 22:10:09.023768   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 22:10:09.023783   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Checking permissions on dir: /home
	I0213 22:10:09.023801   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Skipping /home - not owner
	I0213 22:10:09.023833   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Creating domain...
	I0213 22:10:09.024803   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) define libvirt domain using xml: 
	I0213 22:10:09.024839   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) <domain type='kvm'>
	I0213 22:10:09.024853   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <name>ingress-addon-legacy-741217</name>
	I0213 22:10:09.024874   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <memory unit='MiB'>4096</memory>
	I0213 22:10:09.024890   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <vcpu>2</vcpu>
	I0213 22:10:09.024904   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <features>
	I0213 22:10:09.024918   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <acpi/>
	I0213 22:10:09.024932   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <apic/>
	I0213 22:10:09.024947   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <pae/>
	I0213 22:10:09.024964   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     
	I0213 22:10:09.024980   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   </features>
	I0213 22:10:09.024994   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <cpu mode='host-passthrough'>
	I0213 22:10:09.025008   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   
	I0213 22:10:09.025021   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   </cpu>
	I0213 22:10:09.025035   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <os>
	I0213 22:10:09.025056   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <type>hvm</type>
	I0213 22:10:09.025071   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <boot dev='cdrom'/>
	I0213 22:10:09.025082   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <boot dev='hd'/>
	I0213 22:10:09.025098   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <bootmenu enable='no'/>
	I0213 22:10:09.025111   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   </os>
	I0213 22:10:09.025126   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   <devices>
	I0213 22:10:09.025144   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <disk type='file' device='cdrom'>
	I0213 22:10:09.025167   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/boot2docker.iso'/>
	I0213 22:10:09.025183   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <target dev='hdc' bus='scsi'/>
	I0213 22:10:09.025198   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <readonly/>
	I0213 22:10:09.025211   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </disk>
	I0213 22:10:09.025225   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <disk type='file' device='disk'>
	I0213 22:10:09.025244   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 22:10:09.025277   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/ingress-addon-legacy-741217.rawdisk'/>
	I0213 22:10:09.025293   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <target dev='hda' bus='virtio'/>
	I0213 22:10:09.025306   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </disk>
	I0213 22:10:09.025322   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <interface type='network'>
	I0213 22:10:09.025337   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <source network='mk-ingress-addon-legacy-741217'/>
	I0213 22:10:09.025351   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <model type='virtio'/>
	I0213 22:10:09.025366   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </interface>
	I0213 22:10:09.025381   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <interface type='network'>
	I0213 22:10:09.025399   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <source network='default'/>
	I0213 22:10:09.025413   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <model type='virtio'/>
	I0213 22:10:09.025442   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </interface>
	I0213 22:10:09.025470   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <serial type='pty'>
	I0213 22:10:09.025502   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <target port='0'/>
	I0213 22:10:09.025533   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </serial>
	I0213 22:10:09.025554   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <console type='pty'>
	I0213 22:10:09.025573   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <target type='serial' port='0'/>
	I0213 22:10:09.025589   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </console>
	I0213 22:10:09.025602   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     <rng model='virtio'>
	I0213 22:10:09.025619   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)       <backend model='random'>/dev/random</backend>
	I0213 22:10:09.025632   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     </rng>
	I0213 22:10:09.025650   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     
	I0213 22:10:09.025668   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)     
	I0213 22:10:09.025683   25250 main.go:141] libmachine: (ingress-addon-legacy-741217)   </devices>
	I0213 22:10:09.025695   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) </domain>
	I0213 22:10:09.025712   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) 
	I0213 22:10:09.030037   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:8a:f0:a4 in network default
	I0213 22:10:09.030557   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Ensuring networks are active...
	I0213 22:10:09.030581   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:09.031307   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Ensuring network default is active
	I0213 22:10:09.031618   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Ensuring network mk-ingress-addon-legacy-741217 is active
	I0213 22:10:09.032193   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Getting domain xml...
	I0213 22:10:09.032867   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Creating domain...
	I0213 22:10:10.236996   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Waiting to get IP...
	I0213 22:10:10.237751   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.238176   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.238204   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:10.238149   25284 retry.go:31] will retry after 197.63814ms: waiting for machine to come up
	I0213 22:10:10.437791   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.438201   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.438225   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:10.438169   25284 retry.go:31] will retry after 366.374497ms: waiting for machine to come up
	I0213 22:10:10.805713   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.806123   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:10.806157   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:10.806077   25284 retry.go:31] will retry after 387.330305ms: waiting for machine to come up
	I0213 22:10:11.194607   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:11.194950   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:11.194977   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:11.194915   25284 retry.go:31] will retry after 503.204654ms: waiting for machine to come up
	I0213 22:10:11.699514   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:11.700038   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:11.700070   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:11.699993   25284 retry.go:31] will retry after 674.931554ms: waiting for machine to come up
	I0213 22:10:12.376803   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:12.377276   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:12.377300   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:12.377245   25284 retry.go:31] will retry after 650.503016ms: waiting for machine to come up
	I0213 22:10:13.029265   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:13.029662   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:13.029694   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:13.029634   25284 retry.go:31] will retry after 1.017630181s: waiting for machine to come up
	I0213 22:10:14.049310   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:14.049765   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:14.049784   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:14.049719   25284 retry.go:31] will retry after 1.000111697s: waiting for machine to come up
	I0213 22:10:15.051059   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:15.051389   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:15.051418   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:15.051336   25284 retry.go:31] will retry after 1.749808525s: waiting for machine to come up
	I0213 22:10:16.803212   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:16.803714   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:16.803748   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:16.803651   25284 retry.go:31] will retry after 1.498988773s: waiting for machine to come up
	I0213 22:10:18.304329   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:18.304786   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:18.304839   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:18.304752   25284 retry.go:31] will retry after 2.637427801s: waiting for machine to come up
	I0213 22:10:20.945014   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:20.945451   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:20.945481   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:20.945421   25284 retry.go:31] will retry after 2.336281833s: waiting for machine to come up
	I0213 22:10:23.284950   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:23.285291   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:23.285321   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:23.285249   25284 retry.go:31] will retry after 2.77451976s: waiting for machine to come up
	I0213 22:10:26.063241   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:26.063590   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find current IP address of domain ingress-addon-legacy-741217 in network mk-ingress-addon-legacy-741217
	I0213 22:10:26.063622   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | I0213 22:10:26.063553   25284 retry.go:31] will retry after 4.798735915s: waiting for machine to come up
	I0213 22:10:30.865643   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:30.866101   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Found IP for machine: 192.168.39.71
	I0213 22:10:30.866132   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has current primary IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:30.866139   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Reserving static IP address...
	I0213 22:10:30.866440   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-741217", mac: "52:54:00:99:26:5b", ip: "192.168.39.71"} in network mk-ingress-addon-legacy-741217
	I0213 22:10:30.938873   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Getting to WaitForSSH function...
	I0213 22:10:30.938911   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Reserved static IP address: 192.168.39.71
	I0213 22:10:30.938928   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Waiting for SSH to be available...
	I0213 22:10:30.941828   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:30.942293   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:30.942327   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:30.942445   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Using SSH client type: external
	I0213 22:10:30.942475   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa (-rw-------)
	I0213 22:10:30.942514   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.71 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 22:10:30.942531   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | About to run SSH command:
	I0213 22:10:30.942543   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | exit 0
	I0213 22:10:31.034297   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | SSH cmd err, output: <nil>: 
	I0213 22:10:31.034537   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) KVM machine creation complete!
	I0213 22:10:31.034929   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetConfigRaw
	I0213 22:10:31.035457   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:31.035637   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:31.035828   25250 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 22:10:31.035846   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetState
	I0213 22:10:31.037201   25250 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 22:10:31.037223   25250 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 22:10:31.037235   25250 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 22:10:31.037243   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.039640   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.040042   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.040070   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.040186   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.040402   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.040568   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.040714   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.040909   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:31.041264   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:31.041279   25250 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 22:10:31.165210   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 22:10:31.165238   25250 main.go:141] libmachine: Detecting the provisioner...
	I0213 22:10:31.165247   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.168145   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.168552   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.168590   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.168796   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.169010   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.169184   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.169355   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.169529   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:31.169855   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:31.169932   25250 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 22:10:31.290773   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 22:10:31.290828   25250 main.go:141] libmachine: found compatible host: buildroot
	I0213 22:10:31.290849   25250 main.go:141] libmachine: Provisioning with buildroot...
	I0213 22:10:31.290865   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetMachineName
	I0213 22:10:31.291134   25250 buildroot.go:166] provisioning hostname "ingress-addon-legacy-741217"
	I0213 22:10:31.291159   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetMachineName
	I0213 22:10:31.291310   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.293615   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.293950   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.293980   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.294141   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.294301   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.294448   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.294542   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.294671   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:31.294978   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:31.294990   25250 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-741217 && echo "ingress-addon-legacy-741217" | sudo tee /etc/hostname
	I0213 22:10:31.427584   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-741217
	
	I0213 22:10:31.427630   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.430490   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.430834   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.430873   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.431013   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.431215   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.431402   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.431504   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.431648   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:31.432016   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:31.432035   25250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-741217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-741217/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-741217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 22:10:31.563468   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 22:10:31.563496   25250 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 22:10:31.563524   25250 buildroot.go:174] setting up certificates
	I0213 22:10:31.563534   25250 provision.go:83] configureAuth start
	I0213 22:10:31.563546   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetMachineName
	I0213 22:10:31.563873   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetIP
	I0213 22:10:31.566478   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.566815   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.566843   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.566952   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.569354   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.569717   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.569756   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.569976   25250 provision.go:138] copyHostCerts
	I0213 22:10:31.570014   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:10:31.570061   25250 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 22:10:31.570092   25250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:10:31.570186   25250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 22:10:31.570336   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:10:31.570369   25250 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 22:10:31.570377   25250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:10:31.570427   25250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 22:10:31.570498   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:10:31.570527   25250 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 22:10:31.570537   25250 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:10:31.570582   25250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 22:10:31.570660   25250 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-741217 san=[192.168.39.71 192.168.39.71 localhost 127.0.0.1 minikube ingress-addon-legacy-741217]
	I0213 22:10:31.670490   25250 provision.go:172] copyRemoteCerts
	I0213 22:10:31.670586   25250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 22:10:31.670616   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.673221   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.673609   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.673640   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.673822   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.674046   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.674224   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.674354   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:10:31.762791   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 22:10:31.762876   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 22:10:31.786290   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 22:10:31.786364   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0213 22:10:31.808488   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 22:10:31.808569   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 22:10:31.831939   25250 provision.go:86] duration metric: configureAuth took 268.391852ms
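	For reference, the server certificate staged at /etc/docker/server.pem above can be checked on the guest against the san=[...] list it was generated with; this is a generic openssl sketch, not a command the test itself runs:

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    # expected to cover 192.168.39.71, localhost, 127.0.0.1, minikube and ingress-addon-legacy-741217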
	I0213 22:10:31.831969   25250 buildroot.go:189] setting minikube options for container-runtime
	I0213 22:10:31.832144   25250 config.go:182] Loaded profile config "ingress-addon-legacy-741217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0213 22:10:31.832212   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:31.834805   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.835134   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:31.835164   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:31.835329   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:31.835534   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.835712   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:31.835826   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:31.835975   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:31.836290   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:31.836305   25250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 22:10:32.157992   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 22:10:32.158015   25250 main.go:141] libmachine: Checking connection to Docker...
	I0213 22:10:32.158028   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetURL
	I0213 22:10:32.159287   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Using libvirt version 6000000
	I0213 22:10:32.161549   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.161928   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.161953   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.162114   25250 main.go:141] libmachine: Docker is up and running!
	I0213 22:10:32.162128   25250 main.go:141] libmachine: Reticulating splines...
	I0213 22:10:32.162134   25250 client.go:171] LocalClient.Create took 23.602704437s
	I0213 22:10:32.162153   25250 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-741217" took 23.602777801s
	I0213 22:10:32.162161   25250 start.go:300] post-start starting for "ingress-addon-legacy-741217" (driver="kvm2")
	I0213 22:10:32.162170   25250 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 22:10:32.162194   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:32.162440   25250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 22:10:32.162471   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:32.164757   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.165125   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.165158   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.165269   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:32.165472   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:32.165647   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:32.165792   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:10:32.255145   25250 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 22:10:32.259272   25250 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 22:10:32.259299   25250 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 22:10:32.259369   25250 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 22:10:32.259440   25250 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 22:10:32.259450   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /etc/ssl/certs/162002.pem
	I0213 22:10:32.259535   25250 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 22:10:32.268163   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:10:32.290859   25250 start.go:303] post-start completed in 128.685825ms
	I0213 22:10:32.290904   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetConfigRaw
	I0213 22:10:32.291446   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetIP
	I0213 22:10:32.294118   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.294450   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.294485   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.294661   25250 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/config.json ...
	I0213 22:10:32.294835   25250 start.go:128] duration metric: createHost completed in 23.753984115s
	I0213 22:10:32.294857   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:32.297294   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.297643   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.297676   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.297792   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:32.298006   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:32.298173   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:32.298315   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:32.298488   25250 main.go:141] libmachine: Using SSH client type: native
	I0213 22:10:32.298809   25250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.71 22 <nil> <nil>}
	I0213 22:10:32.298823   25250 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 22:10:32.422809   25250 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707862232.388974109
	
	I0213 22:10:32.422832   25250 fix.go:206] guest clock: 1707862232.388974109
	I0213 22:10:32.422840   25250 fix.go:219] Guest: 2024-02-13 22:10:32.388974109 +0000 UTC Remote: 2024-02-13 22:10:32.294846773 +0000 UTC m=+29.813294268 (delta=94.127336ms)
	I0213 22:10:32.422877   25250 fix.go:190] guest clock delta is within tolerance: 94.127336ms
	I0213 22:10:32.422882   25250 start.go:83] releasing machines lock for "ingress-addon-legacy-741217", held for 23.882123772s
	I0213 22:10:32.422904   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:32.423144   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetIP
	I0213 22:10:32.425522   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.425858   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.425901   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.426032   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:32.426518   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:32.426677   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:10:32.426762   25250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 22:10:32.426808   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:32.426871   25250 ssh_runner.go:195] Run: cat /version.json
	I0213 22:10:32.426893   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:10:32.429372   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.429626   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.429692   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.429720   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.429886   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:32.429994   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:32.430015   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:32.430069   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:32.430173   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:10:32.430244   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:32.430293   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:10:32.430392   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:10:32.430418   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:10:32.430543   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:10:32.514540   25250 ssh_runner.go:195] Run: systemctl --version
	I0213 22:10:32.541152   25250 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 22:10:32.697066   25250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 22:10:32.703098   25250 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 22:10:32.703175   25250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 22:10:32.717293   25250 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 22:10:32.717323   25250 start.go:475] detecting cgroup driver to use...
	I0213 22:10:32.717403   25250 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 22:10:32.730850   25250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 22:10:32.743460   25250 docker.go:217] disabling cri-docker service (if available) ...
	I0213 22:10:32.743522   25250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 22:10:32.755896   25250 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 22:10:32.768720   25250 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 22:10:32.871024   25250 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 22:10:32.982131   25250 docker.go:233] disabling docker service ...
	I0213 22:10:32.982204   25250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 22:10:32.994819   25250 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 22:10:33.007105   25250 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 22:10:33.110105   25250 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 22:10:33.210849   25250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 22:10:33.222808   25250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 22:10:33.239665   25250 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0213 22:10:33.239720   25250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:10:33.248800   25250 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 22:10:33.248868   25250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:10:33.257922   25250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:10:33.266878   25250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:10:33.275755   25250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 22:10:33.284847   25250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 22:10:33.292527   25250 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 22:10:33.292586   25250 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 22:10:33.304862   25250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 22:10:33.313499   25250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 22:10:33.410846   25250 ssh_runner.go:195] Run: sudo systemctl restart crio
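	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager and conmon cgroup shown below; a sketch of a manual check, reconstructed from the commands rather than copied from this run:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.2"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"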
	I0213 22:10:33.571573   25250 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 22:10:33.571633   25250 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 22:10:33.576147   25250 start.go:543] Will wait 60s for crictl version
	I0213 22:10:33.576188   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:33.579988   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 22:10:33.620565   25250 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 22:10:33.620670   25250 ssh_runner.go:195] Run: crio --version
	I0213 22:10:33.668533   25250 ssh_runner.go:195] Run: crio --version
	I0213 22:10:33.720834   25250 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0213 22:10:33.722030   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetIP
	I0213 22:10:33.724798   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:33.725171   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:10:33.725204   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:10:33.725396   25250 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 22:10:33.729520   25250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 22:10:33.741909   25250 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0213 22:10:33.741966   25250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 22:10:33.777511   25250 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0213 22:10:33.777579   25250 ssh_runner.go:195] Run: which lz4
	I0213 22:10:33.781375   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0213 22:10:33.781493   25250 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 22:10:33.785685   25250 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 22:10:33.785715   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0213 22:10:35.760624   25250 crio.go:444] Took 1.979172 seconds to copy over tarball
	I0213 22:10:35.760725   25250 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 22:10:39.199918   25250 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.439156626s)
	I0213 22:10:39.199943   25250 crio.go:451] Took 3.439287 seconds to extract the tarball
	I0213 22:10:39.199951   25250 ssh_runner.go:146] rm: /preloaded.tar.lz4
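	The preload handling above copies the image tarball to the guest, unpacks it under /var and removes it; done by hand from the host it would amount to roughly the following (the in-guest paths, tar flags and ssh user are taken from the log, while the scp/ssh invocation and key path are illustrative assumptions):

	    scp -i ~/.minikube/machines/ingress-addon-legacy-741217/id_rsa \
	        preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.71:/preloaded.tar.lz4
	    ssh docker@192.168.39.71 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'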
	I0213 22:10:39.244971   25250 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 22:10:39.301325   25250 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0213 22:10:39.301355   25250 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 22:10:39.301402   25250 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 22:10:39.301442   25250 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 22:10:39.301464   25250 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 22:10:39.301481   25250 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0213 22:10:39.301440   25250 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 22:10:39.301465   25250 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 22:10:39.301616   25250 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0213 22:10:39.301687   25250 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0213 22:10:39.302789   25250 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 22:10:39.302803   25250 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 22:10:39.302792   25250 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 22:10:39.302825   25250 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0213 22:10:39.302793   25250 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 22:10:39.302844   25250 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 22:10:39.302856   25250 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0213 22:10:39.302928   25250 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0213 22:10:39.497812   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0213 22:10:39.500950   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0213 22:10:39.507368   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0213 22:10:39.508984   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0213 22:10:39.523942   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 22:10:39.531074   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0213 22:10:39.542911   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0213 22:10:39.555121   25250 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0213 22:10:39.555164   25250 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0213 22:10:39.555221   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.619515   25250 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 22:10:39.647719   25250 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0213 22:10:39.647757   25250 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0213 22:10:39.647805   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.682040   25250 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0213 22:10:39.682083   25250 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0213 22:10:39.682134   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.698375   25250 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0213 22:10:39.698413   25250 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0213 22:10:39.698475   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.711974   25250 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0213 22:10:39.712015   25250 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0213 22:10:39.712034   25250 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 22:10:39.712040   25250 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0213 22:10:39.712081   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.712084   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.715780   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0213 22:10:39.715934   25250 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0213 22:10:39.715965   25250 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0213 22:10:39.715997   25250 ssh_runner.go:195] Run: which crictl
	I0213 22:10:39.834441   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0213 22:10:39.834464   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0213 22:10:39.834485   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0213 22:10:39.834441   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0213 22:10:39.834541   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0213 22:10:39.834618   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0213 22:10:39.834659   25250 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0213 22:10:39.960421   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0213 22:10:39.960572   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0213 22:10:39.973441   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0213 22:10:39.973489   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0213 22:10:39.978493   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0213 22:10:39.978649   25250 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0213 22:10:39.978706   25250 cache_images.go:92] LoadImages completed in 677.335481ms
	W0213 22:10:39.978775   25250 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0213 22:10:39.978840   25250 ssh_runner.go:195] Run: crio config
	I0213 22:10:40.041259   25250 cni.go:84] Creating CNI manager for ""
	I0213 22:10:40.041285   25250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 22:10:40.041306   25250 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 22:10:40.041330   25250 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.71 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-741217 NodeName:ingress-addon-legacy-741217 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.71"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.71 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 22:10:40.041496   25250 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.71
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-741217"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.71
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.71"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 22:10:40.041592   25250 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-741217 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.71
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-741217 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 22:10:40.041668   25250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0213 22:10:40.051011   25250 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 22:10:40.051070   25250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 22:10:40.060647   25250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0213 22:10:40.077801   25250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0213 22:10:40.093743   25250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
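	The three scp steps above stage the kubelet drop-in, the kubelet unit and the generated kubeadm config at fixed paths in the guest; a quick sketch of confirming them there (not part of the test itself):

	    systemctl cat kubelet                                # kubelet.service plus the 10-kubeadm.conf drop-in
	    sudo head -n 20 /var/tmp/minikube/kubeadm.yaml.new   # generated kubeadm config (2126 bytes in this run)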
	I0213 22:10:40.109923   25250 ssh_runner.go:195] Run: grep 192.168.39.71	control-plane.minikube.internal$ /etc/hosts
	I0213 22:10:40.113740   25250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.71	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 22:10:40.125584   25250 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217 for IP: 192.168.39.71
	I0213 22:10:40.125645   25250 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.125816   25250 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 22:10:40.125857   25250 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 22:10:40.125956   25250 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key
	I0213 22:10:40.125979   25250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt with IP's: []
	I0213 22:10:40.283651   25250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt ...
	I0213 22:10:40.283686   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: {Name:mkc64b744788d02f31c387980cc05a4298bc0a24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.283872   25250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key ...
	I0213 22:10:40.283885   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key: {Name:mk698297ebc62616fab748702c2ccd2277953461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.283963   25250 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key.f4667c0f
	I0213 22:10:40.283979   25250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt.f4667c0f with IP's: [192.168.39.71 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 22:10:40.385991   25250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt.f4667c0f ...
	I0213 22:10:40.386022   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt.f4667c0f: {Name:mk7dfa26462f21f056ff14ecf5bce63ea4fbff07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.386168   25250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key.f4667c0f ...
	I0213 22:10:40.386181   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key.f4667c0f: {Name:mkb5c9eaa9402e994c29077088321804bb57136a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.386243   25250 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt.f4667c0f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt
	I0213 22:10:40.386322   25250 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key.f4667c0f -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key
	I0213 22:10:40.386380   25250 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.key
	I0213 22:10:40.386398   25250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.crt with IP's: []
	I0213 22:10:40.569740   25250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.crt ...
	I0213 22:10:40.569769   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.crt: {Name:mk0a4d0684ace2867a2e3d5a71e4978600e5161e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.569947   25250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.key ...
	I0213 22:10:40.569967   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.key: {Name:mk7ae118d8d6251eb564f5791b709cfbe7e20f44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:10:40.570048   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 22:10:40.570067   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 22:10:40.570076   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 22:10:40.570092   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 22:10:40.570104   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 22:10:40.570114   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 22:10:40.570127   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 22:10:40.570137   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 22:10:40.570186   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 22:10:40.570224   25250 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 22:10:40.570235   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 22:10:40.570262   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 22:10:40.570288   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 22:10:40.570310   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 22:10:40.570349   25250 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:10:40.570373   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:10:40.570386   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem -> /usr/share/ca-certificates/16200.pem
	I0213 22:10:40.570401   25250 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /usr/share/ca-certificates/162002.pem
	I0213 22:10:40.571014   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 22:10:40.594682   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 22:10:40.618545   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 22:10:40.640867   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 22:10:40.664390   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 22:10:40.689191   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 22:10:40.712111   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 22:10:40.735175   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 22:10:40.757805   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 22:10:40.782760   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 22:10:40.807275   25250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 22:10:40.831300   25250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 22:10:40.849159   25250 ssh_runner.go:195] Run: openssl version
	I0213 22:10:40.855142   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 22:10:40.866910   25250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:10:40.871660   25250 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:10:40.871722   25250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:10:40.877240   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 22:10:40.889250   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 22:10:40.901332   25250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 22:10:40.906345   25250 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:10:40.906409   25250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 22:10:40.912116   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 22:10:40.924579   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 22:10:40.937058   25250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 22:10:40.942306   25250 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:10:40.942362   25250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 22:10:40.947990   25250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 22:10:40.958946   25250 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 22:10:40.963293   25250 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 22:10:40.963341   25250 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-741217 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-741217 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:10:40.963408   25250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 22:10:40.963457   25250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 22:10:41.002792   25250 cri.go:89] found id: ""
	I0213 22:10:41.002873   25250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 22:10:41.013213   25250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 22:10:41.023080   25250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 22:10:41.033168   25250 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 22:10:41.033212   25250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0213 22:10:41.090190   25250 kubeadm.go:322] W0213 22:10:41.068661     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0213 22:10:41.229109   25250 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 22:10:43.707448   25250 kubeadm.go:322] W0213 22:10:43.688781     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 22:10:43.709205   25250 kubeadm.go:322] W0213 22:10:43.690372     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0213 22:10:54.803611   25250 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0213 22:10:54.803688   25250 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 22:10:54.803782   25250 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 22:10:54.803916   25250 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 22:10:54.804060   25250 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 22:10:54.804220   25250 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 22:10:54.804381   25250 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 22:10:54.804446   25250 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 22:10:54.804540   25250 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 22:10:54.805819   25250 out.go:204]   - Generating certificates and keys ...
	I0213 22:10:54.805932   25250 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 22:10:54.806036   25250 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 22:10:54.806126   25250 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 22:10:54.806195   25250 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 22:10:54.806279   25250 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 22:10:54.806365   25250 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 22:10:54.806446   25250 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 22:10:54.806633   25250 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-741217 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0213 22:10:54.806713   25250 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 22:10:54.806884   25250 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-741217 localhost] and IPs [192.168.39.71 127.0.0.1 ::1]
	I0213 22:10:54.806978   25250 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 22:10:54.807082   25250 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 22:10:54.807171   25250 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 22:10:54.807247   25250 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 22:10:54.807320   25250 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 22:10:54.807392   25250 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 22:10:54.807478   25250 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 22:10:54.807565   25250 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 22:10:54.807660   25250 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 22:10:54.809147   25250 out.go:204]   - Booting up control plane ...
	I0213 22:10:54.809277   25250 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 22:10:54.809379   25250 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 22:10:54.809453   25250 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 22:10:54.809547   25250 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 22:10:54.809716   25250 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 22:10:54.809848   25250 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503185 seconds
	I0213 22:10:54.810015   25250 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 22:10:54.810186   25250 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 22:10:54.810268   25250 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 22:10:54.810456   25250 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-741217 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0213 22:10:54.810554   25250 kubeadm.go:322] [bootstrap-token] Using token: fs6u48.0bv2ka9yi315neoo
	I0213 22:10:54.812569   25250 out.go:204]   - Configuring RBAC rules ...
	I0213 22:10:54.812695   25250 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 22:10:54.812813   25250 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 22:10:54.812961   25250 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 22:10:54.813106   25250 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 22:10:54.813237   25250 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 22:10:54.813346   25250 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 22:10:54.813492   25250 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 22:10:54.813558   25250 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 22:10:54.813623   25250 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 22:10:54.813635   25250 kubeadm.go:322] 
	I0213 22:10:54.813730   25250 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 22:10:54.813739   25250 kubeadm.go:322] 
	I0213 22:10:54.813848   25250 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 22:10:54.813885   25250 kubeadm.go:322] 
	I0213 22:10:54.813918   25250 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 22:10:54.814029   25250 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 22:10:54.814095   25250 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 22:10:54.814105   25250 kubeadm.go:322] 
	I0213 22:10:54.814181   25250 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 22:10:54.814320   25250 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 22:10:54.814419   25250 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 22:10:54.814429   25250 kubeadm.go:322] 
	I0213 22:10:54.814544   25250 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 22:10:54.814653   25250 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 22:10:54.814681   25250 kubeadm.go:322] 
	I0213 22:10:54.814783   25250 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fs6u48.0bv2ka9yi315neoo \
	I0213 22:10:54.814924   25250 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 22:10:54.814967   25250 kubeadm.go:322]     --control-plane 
	I0213 22:10:54.814980   25250 kubeadm.go:322] 
	I0213 22:10:54.815129   25250 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 22:10:54.815164   25250 kubeadm.go:322] 
	I0213 22:10:54.815280   25250 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fs6u48.0bv2ka9yi315neoo \
	I0213 22:10:54.815442   25250 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 22:10:54.815459   25250 cni.go:84] Creating CNI manager for ""
	I0213 22:10:54.815471   25250 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 22:10:54.817052   25250 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 22:10:54.818372   25250 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 22:10:54.846214   25250 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 22:10:54.868689   25250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 22:10:54.868777   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=ingress-addon-legacy-741217 minikube.k8s.io/updated_at=2024_02_13T22_10_54_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:54.868777   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:54.882956   25250 ops.go:34] apiserver oom_adj: -16
	I0213 22:10:55.320589   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:55.820903   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:56.321317   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:56.820636   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:57.321507   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:57.821386   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:58.320827   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:58.821394   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:59.321423   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:10:59.821573   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:00.321042   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:00.821510   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:01.321576   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:01.820798   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:02.321570   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:02.821117   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:03.321207   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:03.821642   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:04.321260   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:04.821072   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:05.321360   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:05.821525   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:06.321357   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:06.820710   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:07.320775   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:07.820847   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:08.321310   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:08.821000   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:09.320635   25250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:11:09.919067   25250 kubeadm.go:1088] duration metric: took 15.050375031s to wait for elevateKubeSystemPrivileges.
	I0213 22:11:09.919103   25250 kubeadm.go:406] StartCluster complete in 28.955762528s
	I0213 22:11:09.919132   25250 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:11:09.919239   25250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:11:09.919939   25250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:11:09.920183   25250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 22:11:09.920323   25250 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 22:11:09.920428   25250 config.go:182] Loaded profile config "ingress-addon-legacy-741217": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0213 22:11:09.920444   25250 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-741217"
	I0213 22:11:09.920425   25250 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-741217"
	I0213 22:11:09.920474   25250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-741217"
	I0213 22:11:09.920480   25250 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-741217"
	I0213 22:11:09.920549   25250 host.go:66] Checking if "ingress-addon-legacy-741217" exists ...
	I0213 22:11:09.920905   25250 kapi.go:59] client config for ingress-addon-legacy-741217: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:11:09.921003   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:11:09.921020   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:11:09.921043   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:11:09.921043   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:11:09.921638   25250 cert_rotation.go:137] Starting client certificate rotation controller
	I0213 22:11:09.935452   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0213 22:11:09.935868   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:11:09.936391   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:11:09.936426   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:11:09.936771   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:11:09.937271   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:11:09.937320   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:11:09.939922   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0213 22:11:09.940342   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:11:09.940816   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:11:09.940843   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:11:09.941246   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:11:09.941453   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetState
	I0213 22:11:09.943770   25250 kapi.go:59] client config for ingress-addon-legacy-741217: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:11:09.944122   25250 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-741217"
	I0213 22:11:09.944162   25250 host.go:66] Checking if "ingress-addon-legacy-741217" exists ...
	I0213 22:11:09.944594   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:11:09.944648   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:11:09.952809   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0213 22:11:09.953254   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:11:09.953797   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:11:09.953825   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:11:09.954162   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:11:09.954375   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetState
	I0213 22:11:09.955960   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:11:09.958014   25250 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 22:11:09.959533   25250 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 22:11:09.959561   25250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 22:11:09.959582   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:11:09.961173   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37027
	I0213 22:11:09.961648   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:11:09.962218   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:11:09.962248   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:11:09.962605   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:11:09.962930   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:11:09.963159   25250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:11:09.963206   25250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:11:09.963349   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:11:09.963378   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:11:09.963640   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:11:09.963859   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:11:09.964062   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:11:09.964216   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:11:09.980820   25250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39955
	I0213 22:11:09.981291   25250 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:11:09.981795   25250 main.go:141] libmachine: Using API Version  1
	I0213 22:11:09.981823   25250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:11:09.982172   25250 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:11:09.982366   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetState
	I0213 22:11:09.984090   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .DriverName
	I0213 22:11:09.984360   25250 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 22:11:09.984377   25250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 22:11:09.984395   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHHostname
	I0213 22:11:09.987116   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:11:09.987511   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:26:5b", ip: ""} in network mk-ingress-addon-legacy-741217: {Iface:virbr1 ExpiryTime:2024-02-13 23:10:24 +0000 UTC Type:0 Mac:52:54:00:99:26:5b Iaid: IPaddr:192.168.39.71 Prefix:24 Hostname:ingress-addon-legacy-741217 Clientid:01:52:54:00:99:26:5b}
	I0213 22:11:09.987542   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | domain ingress-addon-legacy-741217 has defined IP address 192.168.39.71 and MAC address 52:54:00:99:26:5b in network mk-ingress-addon-legacy-741217
	I0213 22:11:09.987660   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHPort
	I0213 22:11:09.987865   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHKeyPath
	I0213 22:11:09.988022   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .GetSSHUsername
	I0213 22:11:09.988177   25250 sshutil.go:53] new ssh client: &{IP:192.168.39.71 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/ingress-addon-legacy-741217/id_rsa Username:docker}
	I0213 22:11:10.124783   25250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 22:11:10.136620   25250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 22:11:10.161880   25250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 22:11:10.647098   25250 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-741217" context rescaled to 1 replicas
	I0213 22:11:10.647141   25250 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.71 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 22:11:10.648926   25250 out.go:177] * Verifying Kubernetes components...
	I0213 22:11:10.650287   25250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:11:10.797619   25250 main.go:141] libmachine: Making call to close driver server
	I0213 22:11:10.797660   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Close
	I0213 22:11:10.797687   25250 main.go:141] libmachine: Making call to close driver server
	I0213 22:11:10.797633   25250 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 22:11:10.797721   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Close
	I0213 22:11:10.798036   25250 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:11:10.798065   25250 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:11:10.798075   25250 main.go:141] libmachine: Making call to close driver server
	I0213 22:11:10.798084   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Close
	I0213 22:11:10.798036   25250 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:11:10.798117   25250 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:11:10.798126   25250 main.go:141] libmachine: Making call to close driver server
	I0213 22:11:10.798135   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Close
	I0213 22:11:10.798142   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Closing plugin on server side
	I0213 22:11:10.798379   25250 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:11:10.798399   25250 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:11:10.798453   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Closing plugin on server side
	I0213 22:11:10.798477   25250 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:11:10.798485   25250 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:11:10.798673   25250 kapi.go:59] client config for ingress-addon-legacy-741217: &rest.Config{Host:"https://192.168.39.71:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:11:10.799130   25250 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-741217" to be "Ready" ...
	I0213 22:11:10.811907   25250 node_ready.go:49] node "ingress-addon-legacy-741217" has status "Ready":"True"
	I0213 22:11:10.811941   25250 node_ready.go:38] duration metric: took 12.768161ms waiting for node "ingress-addon-legacy-741217" to be "Ready" ...
	I0213 22:11:10.811955   25250 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:11:10.817389   25250 main.go:141] libmachine: Making call to close driver server
	I0213 22:11:10.817419   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) Calling .Close
	I0213 22:11:10.817780   25250 main.go:141] libmachine: Successfully made call to close driver server
	I0213 22:11:10.817800   25250 main.go:141] libmachine: (ingress-addon-legacy-741217) DBG | Closing plugin on server side
	I0213 22:11:10.817803   25250 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 22:11:10.819583   25250 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0213 22:11:10.821139   25250 addons.go:505] enable addons completed in 900.817659ms: enabled=[storage-provisioner default-storageclass]
	I0213 22:11:10.832362   25250 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-hjljs" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:12.839450   25250 pod_ready.go:102] pod "coredns-66bff467f8-hjljs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:14.839494   25250 pod_ready.go:102] pod "coredns-66bff467f8-hjljs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:15.335913   25250 pod_ready.go:97] error getting pod "coredns-66bff467f8-hjljs" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-hjljs" not found
	I0213 22:11:15.335945   25250 pod_ready.go:81] duration metric: took 4.503555012s waiting for pod "coredns-66bff467f8-hjljs" in "kube-system" namespace to be "Ready" ...
	E0213 22:11:15.335958   25250 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-hjljs" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-hjljs" not found
	I0213 22:11:15.335972   25250 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:17.343971   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:19.844259   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:22.343624   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:24.343862   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:26.345037   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:28.843931   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:31.342913   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:33.344052   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:35.843995   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:38.343045   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:40.344094   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:42.353944   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:44.845969   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:47.344779   25250 pod_ready.go:102] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"False"
	I0213 22:11:48.351551   25250 pod_ready.go:92] pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.351573   25250 pod_ready.go:81] duration metric: took 33.015593385s waiting for pod "coredns-66bff467f8-zsjjs" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.351586   25250 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.358956   25250 pod_ready.go:92] pod "etcd-ingress-addon-legacy-741217" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.358976   25250 pod_ready.go:81] duration metric: took 7.383742ms waiting for pod "etcd-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.358984   25250 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.364596   25250 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-741217" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.364613   25250 pod_ready.go:81] duration metric: took 5.622885ms waiting for pod "kube-apiserver-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.364621   25250 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.369912   25250 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-741217" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.369931   25250 pod_ready.go:81] duration metric: took 5.304089ms waiting for pod "kube-controller-manager-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.369945   25250 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4t7xf" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.375204   25250 pod_ready.go:92] pod "kube-proxy-4t7xf" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.375226   25250 pod_ready.go:81] duration metric: took 5.27305ms waiting for pod "kube-proxy-4t7xf" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.375237   25250 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.536573   25250 request.go:629] Waited for 161.240882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-741217
	I0213 22:11:48.736820   25250 request.go:629] Waited for 196.649923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/nodes/ingress-addon-legacy-741217
	I0213 22:11:48.740761   25250 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-741217" in "kube-system" namespace has status "Ready":"True"
	I0213 22:11:48.740793   25250 pod_ready.go:81] duration metric: took 365.547759ms waiting for pod "kube-scheduler-ingress-addon-legacy-741217" in "kube-system" namespace to be "Ready" ...
	I0213 22:11:48.740804   25250 pod_ready.go:38] duration metric: took 37.92883417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:11:48.740817   25250 api_server.go:52] waiting for apiserver process to appear ...
	I0213 22:11:48.740867   25250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:11:48.754804   25250 api_server.go:72] duration metric: took 38.107602919s to wait for apiserver process to appear ...
	I0213 22:11:48.754838   25250 api_server.go:88] waiting for apiserver healthz status ...
	I0213 22:11:48.754859   25250 api_server.go:253] Checking apiserver healthz at https://192.168.39.71:8443/healthz ...
	I0213 22:11:48.760712   25250 api_server.go:279] https://192.168.39.71:8443/healthz returned 200:
	ok
	I0213 22:11:48.761859   25250 api_server.go:141] control plane version: v1.18.20
	I0213 22:11:48.761891   25250 api_server.go:131] duration metric: took 7.04563ms to wait for apiserver health ...
	I0213 22:11:48.761901   25250 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 22:11:48.937363   25250 request.go:629] Waited for 175.387163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/namespaces/kube-system/pods
	I0213 22:11:48.943717   25250 system_pods.go:59] 7 kube-system pods found
	I0213 22:11:48.943747   25250 system_pods.go:61] "coredns-66bff467f8-zsjjs" [8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281] Running
	I0213 22:11:48.943752   25250 system_pods.go:61] "etcd-ingress-addon-legacy-741217" [e7712da2-32ef-4f1c-ac94-785675549bcb] Running
	I0213 22:11:48.943757   25250 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-741217" [3128f64a-b872-495b-a626-57dcb04973f7] Running
	I0213 22:11:48.943761   25250 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-741217" [e530f15e-1f81-4829-8b5b-ca7e25214ca9] Running
	I0213 22:11:48.943765   25250 system_pods.go:61] "kube-proxy-4t7xf" [bf2232b0-efa6-4c25-80fb-941342d56134] Running
	I0213 22:11:48.943769   25250 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-741217" [b4a0d89c-4b1e-4af4-a3ca-9eff744208c5] Running
	I0213 22:11:48.943772   25250 system_pods.go:61] "storage-provisioner" [22374f85-6acd-4de3-8639-15c40d9403f1] Running
	I0213 22:11:48.943779   25250 system_pods.go:74] duration metric: took 181.871109ms to wait for pod list to return data ...
	I0213 22:11:48.943789   25250 default_sa.go:34] waiting for default service account to be created ...
	I0213 22:11:49.137301   25250 request.go:629] Waited for 193.399859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/namespaces/default/serviceaccounts
	I0213 22:11:49.140283   25250 default_sa.go:45] found service account: "default"
	I0213 22:11:49.140314   25250 default_sa.go:55] duration metric: took 196.519269ms for default service account to be created ...
	I0213 22:11:49.140328   25250 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 22:11:49.336874   25250 request.go:629] Waited for 196.437718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/namespaces/kube-system/pods
	I0213 22:11:49.344512   25250 system_pods.go:86] 7 kube-system pods found
	I0213 22:11:49.344539   25250 system_pods.go:89] "coredns-66bff467f8-zsjjs" [8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281] Running
	I0213 22:11:49.344544   25250 system_pods.go:89] "etcd-ingress-addon-legacy-741217" [e7712da2-32ef-4f1c-ac94-785675549bcb] Running
	I0213 22:11:49.344549   25250 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-741217" [3128f64a-b872-495b-a626-57dcb04973f7] Running
	I0213 22:11:49.344553   25250 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-741217" [e530f15e-1f81-4829-8b5b-ca7e25214ca9] Running
	I0213 22:11:49.344557   25250 system_pods.go:89] "kube-proxy-4t7xf" [bf2232b0-efa6-4c25-80fb-941342d56134] Running
	I0213 22:11:49.344563   25250 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-741217" [b4a0d89c-4b1e-4af4-a3ca-9eff744208c5] Running
	I0213 22:11:49.344567   25250 system_pods.go:89] "storage-provisioner" [22374f85-6acd-4de3-8639-15c40d9403f1] Running
	I0213 22:11:49.344574   25250 system_pods.go:126] duration metric: took 204.240712ms to wait for k8s-apps to be running ...
	I0213 22:11:49.344583   25250 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 22:11:49.344631   25250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:11:49.360987   25250 system_svc.go:56] duration metric: took 16.396226ms WaitForService to wait for kubelet.
	I0213 22:11:49.361019   25250 kubeadm.go:581] duration metric: took 38.713831664s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 22:11:49.361043   25250 node_conditions.go:102] verifying NodePressure condition ...
	I0213 22:11:49.536468   25250 request.go:629] Waited for 175.322624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.71:8443/api/v1/nodes
	I0213 22:11:49.541324   25250 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:11:49.541364   25250 node_conditions.go:123] node cpu capacity is 2
	I0213 22:11:49.541377   25250 node_conditions.go:105] duration metric: took 180.328414ms to run NodePressure ...
	I0213 22:11:49.541391   25250 start.go:228] waiting for startup goroutines ...
	I0213 22:11:49.541400   25250 start.go:233] waiting for cluster config update ...
	I0213 22:11:49.541416   25250 start.go:242] writing updated cluster config ...
	I0213 22:11:49.541671   25250 ssh_runner.go:195] Run: rm -f paused
	I0213 22:11:49.589469   25250 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0213 22:11:49.591205   25250 out.go:177] 
	W0213 22:11:49.592519   25250 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0213 22:11:49.593897   25250 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0213 22:11:49.595252   25250 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-741217" cluster and "default" namespace by default
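
The healthz wait logged above (api_server.go) amounts to polling https://192.168.39.71:8443/healthz until the endpoint answers 200 with the body "ok". Below is a minimal, hypothetical Go sketch of that probe, not minikube's actual implementation: it skips TLS verification for brevity, whereas minikube authenticates with the cluster's client certificates, and the URL and timeout are taken from the log above purely as an illustration.

    // Hypothetical sketch of an apiserver /healthz poll (not minikube's code).
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Assumption: skip certificate verification instead of loading the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// Matches the log above: healthz returned 200 with body "ok".
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.71:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }

The loop returns on the first successful "ok" response and otherwise keeps retrying until the deadline, which is the same pattern the duration metrics above are measuring.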
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 22:10:21 UTC, ends at Tue 2024-02-13 22:14:53 UTC. --
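
The debug entries below are CRI-O answering gRPC calls on its unix socket: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers (an empty filter returns the full container list). A minimal sketch of issuing the same Version and ListContainers calls with the CRI Go client follows; the socket path and module versions are assumptions, and crictl version / crictl ps drive equivalent RPCs from the command line.

    // Hypothetical sketch of querying CRI-O over its CRI socket.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumption: the default CRI-O socket path inside the minikube VM.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Same RPC as the "/runtime.v1.RuntimeService/Version" entries below.
    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

    	// Same RPC as the "/runtime.v1.RuntimeService/ListContainers" entries below:
    	// no filter, so the full container list is returned.
    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range list.Containers {
    		fmt.Printf("%-24s %s\n", c.Metadata.Name, c.State)
    	}
    }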
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.436338720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707862493436322768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203943,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=c0ecb120-fddd-4b06-90ac-4951c62c3d30 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.437009369Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b26633c1-e9f8-4249-82e9-004f26e03cec name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.437366548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b26633c1-e9f8-4249-82e9-004f26e03cec name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.437876462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bf1c17fcafbb258cf957108626ec1e48a03252895a1c473023b54e5901abd6,PodSandboxId:77a8e9e7f1f9f38f67f3361ceb6e0d84d4fe80b49acf0416e85ea3f83176d225,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707862474892868082,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-msfxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38636c2d-a30c-48ad-8f59-9ed6115f51d3,},Annotations:map[string]string{io.kubernetes.container.hash: b61eba3c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:023e6fa2a67968f9458fc282f63cd0a12e09c54373b8f7ba4a1c18cc7db2c588,PodSandboxId:fc94734b973c336ed57a0c09e323af98a2b834395a7145d9448ce02115459abd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707862335012428801,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d14fae32-392d-4e2b-b4b0-41613ce45b7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6578d500,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44bda6ea1ddabc28bf9a6ac3e3cefe407b2a3137d318980c40f689ffea06e28,PodSandboxId:1b8e6e2292ae729cbed72ded4bec3f6105c1ee2815ee1a4a4f35a2f704e37a5e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1707862321861913901,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-h46bb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de89edf9-85ee-4998-b08a-768a0a423022,},Annotations:map[string]string{io.kubernetes.container.hash: 8e55d78c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a423537f2317b5d42146ea8280e13450371e6e8d2cec4de7a4774390cfdc40c8,PodSandboxId:f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862313070983779,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sqm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d90ce00-2b40-441c-b1a8-f24eed4ad07f,},Annotations:map[string]string{io.kubernetes.container.hash: f0586219,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fc7163f4efd49f04e377bc4579a598cfb8b00698fa15de845328b02e42f1cb,PodSandboxId:b2d26896a0f4c25a6e7a95a7c0e1cc916af42926b994e45e63b6be7ced67f924,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862312923444593,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l9v9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2322b86b-f095-409d-9c10-317cab776bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 29328279,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32539e2eeae6586689f9d53ab02c1f21743eff78b57e6ba758282e9a04aebf8c,PodSandboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707862302357733806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b0341610a4c2ed431fbf8e24b90da81863880a43e66abaf06df5d42382a3c1,PodSandboxId:2f1f8170ab0ecb6ca0ab3f7e401ede24b6b548093731832b3f80d1db9780fc11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1707862272568041938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t7xf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2232b0-efa6-4c25-80fb-941342d56134,},Annotations:map[string]string{io.kubernetes.container.hash: 7707ebd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bf51b31878da91258feb88c6fa630f89d2cfec46fdeeb683d523a2ae149fc16,PodSandboxId:96a7b7fe4d76d8b1ae10842bba58936c3a0a7917bf0e0916965a7f93fc02d4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1707862272158987213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zsjjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281,},Annotations:map[string]string{io.kubernetes.container.hash: 699d870,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12cbf121e73f9df1e05a5458e240ecea899d4edd9ed8f99869213bd208fb6a8,PodS
andboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707862271515811720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54b675236666c4cfb58f4f18a6610e6cf1f2bfea7f4273b5bbe5c682871feb8,PodSan
dboxId:72afa4dc17e3e38e8818fab1fb80b3d9e98087f003cd7968714662e8b7f054a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1707862247783248492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d54f619c05658c0e459e59b973a6048,},Annotations:map[string]string{io.kubernetes.container.hash: 2e09db85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b3317aa447fc9ff2ef9db385a9be66ffd90b1dac5b285a80fb9e0f3bfe1037,PodSandboxId:bd9527f325c5782a1c66efcd9609a8db7fbbd68
e8da329ab0171b4a18730ba9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1707862246087448474,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d9df674179190ee87bda02afad619b502d254ce8635dde2013e34a006d4f79,PodSandboxId:6ced8ab9d
3fbd3adf77b2fdfa8199992e11fffcd7836402aa297ff49279b579f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1707862246034791195,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950d11004664889847accee96e1576a669b7d8c82d1323c0cfaaf56eb965108a,PodSandboxId:1dbc19233f9aada
dc5196e73b96345d46d17cb507343f508188850c8614a4725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1707862245804355891,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5426eca4443ba76d362d57916867e17,},Annotations:map[string]string{io.kubernetes.container.hash: efbfef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b26633c1-e9f8-4249-82e9-004f26e03cec name=/runtime.v1.RuntimeService
/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.477655973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b6427424-da96-4ec6-a828-d150b8f7979a name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.477742891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b6427424-da96-4ec6-a828-d150b8f7979a name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.479527721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=76df671b-90bc-470a-8971-17d42e8c8b88 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.480001538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707862493479988724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203943,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=76df671b-90bc-470a-8971-17d42e8c8b88 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.480704113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fc31ad11-6ff8-43c7-b1c2-c6e4b0497c79 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.480753739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fc31ad11-6ff8-43c7-b1c2-c6e4b0497c79 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.480991670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bf1c17fcafbb258cf957108626ec1e48a03252895a1c473023b54e5901abd6,PodSandboxId:77a8e9e7f1f9f38f67f3361ceb6e0d84d4fe80b49acf0416e85ea3f83176d225,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707862474892868082,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-msfxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38636c2d-a30c-48ad-8f59-9ed6115f51d3,},Annotations:map[string]string{io.kubernetes.container.hash: b61eba3c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:023e6fa2a67968f9458fc282f63cd0a12e09c54373b8f7ba4a1c18cc7db2c588,PodSandboxId:fc94734b973c336ed57a0c09e323af98a2b834395a7145d9448ce02115459abd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707862335012428801,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d14fae32-392d-4e2b-b4b0-41613ce45b7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6578d500,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44bda6ea1ddabc28bf9a6ac3e3cefe407b2a3137d318980c40f689ffea06e28,PodSandboxId:1b8e6e2292ae729cbed72ded4bec3f6105c1ee2815ee1a4a4f35a2f704e37a5e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1707862321861913901,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-h46bb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de89edf9-85ee-4998-b08a-768a0a423022,},Annotations:map[string]string{io.kubernetes.container.hash: 8e55d78c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a423537f2317b5d42146ea8280e13450371e6e8d2cec4de7a4774390cfdc40c8,PodSandboxId:f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862313070983779,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sqm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d90ce00-2b40-441c-b1a8-f24eed4ad07f,},Annotations:map[string]string{io.kubernetes.container.hash: f0586219,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fc7163f4efd49f04e377bc4579a598cfb8b00698fa15de845328b02e42f1cb,PodSandboxId:b2d26896a0f4c25a6e7a95a7c0e1cc916af42926b994e45e63b6be7ced67f924,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862312923444593,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l9v9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2322b86b-f095-409d-9c10-317cab776bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 29328279,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32539e2eeae6586689f9d53ab02c1f21743eff78b57e6ba758282e9a04aebf8c,PodSandboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707862302357733806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b0341610a4c2ed431fbf8e24b90da81863880a43e66abaf06df5d42382a3c1,PodSandboxId:2f1f8170ab0ecb6ca0ab3f7e401ede24b6b548093731832b3f80d1db9780fc11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1707862272568041938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t7xf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2232b0-efa6-4c25-80fb-941342d56134,},Annotations:map[string]string{io.kubernetes.container.hash: 7707ebd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bf51b31878da91258feb88c6fa630f89d2cfec46fdeeb683d523a2ae149fc16,PodSandboxId:96a7b7fe4d76d8b1ae10842bba58936c3a0a7917bf0e0916965a7f93fc02d4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1707862272158987213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zsjjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281,},Annotations:map[string]string{io.kubernetes.container.hash: 699d870,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12cbf121e73f9df1e05a5458e240ecea899d4edd9ed8f99869213bd208fb6a8,PodS
andboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707862271515811720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54b675236666c4cfb58f4f18a6610e6cf1f2bfea7f4273b5bbe5c682871feb8,PodSan
dboxId:72afa4dc17e3e38e8818fab1fb80b3d9e98087f003cd7968714662e8b7f054a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1707862247783248492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d54f619c05658c0e459e59b973a6048,},Annotations:map[string]string{io.kubernetes.container.hash: 2e09db85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b3317aa447fc9ff2ef9db385a9be66ffd90b1dac5b285a80fb9e0f3bfe1037,PodSandboxId:bd9527f325c5782a1c66efcd9609a8db7fbbd68
e8da329ab0171b4a18730ba9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1707862246087448474,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d9df674179190ee87bda02afad619b502d254ce8635dde2013e34a006d4f79,PodSandboxId:6ced8ab9d
3fbd3adf77b2fdfa8199992e11fffcd7836402aa297ff49279b579f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1707862246034791195,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950d11004664889847accee96e1576a669b7d8c82d1323c0cfaaf56eb965108a,PodSandboxId:1dbc19233f9aada
dc5196e73b96345d46d17cb507343f508188850c8614a4725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1707862245804355891,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5426eca4443ba76d362d57916867e17,},Annotations:map[string]string{io.kubernetes.container.hash: efbfef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fc31ad11-6ff8-43c7-b1c2-c6e4b0497c79 name=/runtime.v1.RuntimeService
/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.525608633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e0763a01-243e-429c-8edd-c94dfe5b36b2 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.525693113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e0763a01-243e-429c-8edd-c94dfe5b36b2 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.526900950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=185ab60d-07a1-40e0-a542-65dd320171fd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.527376462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707862493527364068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203943,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=185ab60d-07a1-40e0-a542-65dd320171fd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.528585703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3022977a-d786-4284-93e0-54bff82b0cc7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.528686283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3022977a-d786-4284-93e0-54bff82b0cc7 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.529019184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bf1c17fcafbb258cf957108626ec1e48a03252895a1c473023b54e5901abd6,PodSandboxId:77a8e9e7f1f9f38f67f3361ceb6e0d84d4fe80b49acf0416e85ea3f83176d225,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707862474892868082,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-msfxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38636c2d-a30c-48ad-8f59-9ed6115f51d3,},Annotations:map[string]string{io.kubernetes.container.hash: b61eba3c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:023e6fa2a67968f9458fc282f63cd0a12e09c54373b8f7ba4a1c18cc7db2c588,PodSandboxId:fc94734b973c336ed57a0c09e323af98a2b834395a7145d9448ce02115459abd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707862335012428801,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d14fae32-392d-4e2b-b4b0-41613ce45b7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6578d500,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44bda6ea1ddabc28bf9a6ac3e3cefe407b2a3137d318980c40f689ffea06e28,PodSandboxId:1b8e6e2292ae729cbed72ded4bec3f6105c1ee2815ee1a4a4f35a2f704e37a5e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1707862321861913901,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-h46bb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de89edf9-85ee-4998-b08a-768a0a423022,},Annotations:map[string]string{io.kubernetes.container.hash: 8e55d78c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a423537f2317b5d42146ea8280e13450371e6e8d2cec4de7a4774390cfdc40c8,PodSandboxId:f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862313070983779,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sqm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d90ce00-2b40-441c-b1a8-f24eed4ad07f,},Annotations:map[string]string{io.kubernetes.container.hash: f0586219,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fc7163f4efd49f04e377bc4579a598cfb8b00698fa15de845328b02e42f1cb,PodSandboxId:b2d26896a0f4c25a6e7a95a7c0e1cc916af42926b994e45e63b6be7ced67f924,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862312923444593,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l9v9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2322b86b-f095-409d-9c10-317cab776bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 29328279,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32539e2eeae6586689f9d53ab02c1f21743eff78b57e6ba758282e9a04aebf8c,PodSandboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707862302357733806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b0341610a4c2ed431fbf8e24b90da81863880a43e66abaf06df5d42382a3c1,PodSandboxId:2f1f8170ab0ecb6ca0ab3f7e401ede24b6b548093731832b3f80d1db9780fc11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1707862272568041938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t7xf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2232b0-efa6-4c25-80fb-941342d56134,},Annotations:map[string]string{io.kubernetes.container.hash: 7707ebd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bf51b31878da91258feb88c6fa630f89d2cfec46fdeeb683d523a2ae149fc16,PodSandboxId:96a7b7fe4d76d8b1ae10842bba58936c3a0a7917bf0e0916965a7f93fc02d4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1707862272158987213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zsjjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281,},Annotations:map[string]string{io.kubernetes.container.hash: 699d870,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12cbf121e73f9df1e05a5458e240ecea899d4edd9ed8f99869213bd208fb6a8,PodS
andboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707862271515811720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54b675236666c4cfb58f4f18a6610e6cf1f2bfea7f4273b5bbe5c682871feb8,PodSan
dboxId:72afa4dc17e3e38e8818fab1fb80b3d9e98087f003cd7968714662e8b7f054a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1707862247783248492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d54f619c05658c0e459e59b973a6048,},Annotations:map[string]string{io.kubernetes.container.hash: 2e09db85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b3317aa447fc9ff2ef9db385a9be66ffd90b1dac5b285a80fb9e0f3bfe1037,PodSandboxId:bd9527f325c5782a1c66efcd9609a8db7fbbd68
e8da329ab0171b4a18730ba9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1707862246087448474,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d9df674179190ee87bda02afad619b502d254ce8635dde2013e34a006d4f79,PodSandboxId:6ced8ab9d
3fbd3adf77b2fdfa8199992e11fffcd7836402aa297ff49279b579f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1707862246034791195,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950d11004664889847accee96e1576a669b7d8c82d1323c0cfaaf56eb965108a,PodSandboxId:1dbc19233f9aada
dc5196e73b96345d46d17cb507343f508188850c8614a4725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1707862245804355891,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5426eca4443ba76d362d57916867e17,},Annotations:map[string]string{io.kubernetes.container.hash: efbfef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3022977a-d786-4284-93e0-54bff82b0cc7 name=/runtime.v1.RuntimeService
/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.571281162Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=51557b08-ae6f-4b69-9ae4-6f64ee68ad69 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.571344354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=51557b08-ae6f-4b69-9ae4-6f64ee68ad69 name=/runtime.v1.RuntimeService/Version
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.573842228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=47fd54cc-fda1-4cad-ad2e-676b190eac60 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.574341004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707862493574326025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203943,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=47fd54cc-fda1-4cad-ad2e-676b190eac60 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.575215301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aef559e6-b134-49fd-bc83-cccf06cf295c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.575269899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aef559e6-b134-49fd-bc83-cccf06cf295c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:14:53 ingress-addon-legacy-741217 crio[716]: time="2024-02-13 22:14:53.575646986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bf1c17fcafbb258cf957108626ec1e48a03252895a1c473023b54e5901abd6,PodSandboxId:77a8e9e7f1f9f38f67f3361ceb6e0d84d4fe80b49acf0416e85ea3f83176d225,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1707862474892868082,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-msfxt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38636c2d-a30c-48ad-8f59-9ed6115f51d3,},Annotations:map[string]string{io.kubernetes.container.hash: b61eba3c,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:023e6fa2a67968f9458fc282f63cd0a12e09c54373b8f7ba4a1c18cc7db2c588,PodSandboxId:fc94734b973c336ed57a0c09e323af98a2b834395a7145d9448ce02115459abd,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027,State:CONTAINER_RUNNING,CreatedAt:1707862335012428801,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d14fae32-392d-4e2b-b4b0-41613ce45b7f,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 6578d500,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a44bda6ea1ddabc28bf9a6ac3e3cefe407b2a3137d318980c40f689ffea06e28,PodSandboxId:1b8e6e2292ae729cbed72ded4bec3f6105c1ee2815ee1a4a4f35a2f704e37a5e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1707862321861913901,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-h46bb,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de89edf9-85ee-4998-b08a-768a0a423022,},Annotations:map[string]string{io.kubernetes.container.hash: 8e55d78c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a423537f2317b5d42146ea8280e13450371e6e8d2cec4de7a4774390cfdc40c8,PodSandboxId:f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862313070983779,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-8sqm9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5d90ce00-2b40-441c-b1a8-f24eed4ad07f,},Annotations:map[string]string{io.kubernetes.container.hash: f0586219,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75fc7163f4efd49f04e377bc4579a598cfb8b00698fa15de845328b02e42f1cb,PodSandboxId:b2d26896a0f4c25a6e7a95a7c0e1cc916af42926b994e45e63b6be7ced67f924,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1707862312923444593,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l9v9c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2322b86b-f095-409d-9c10-317cab776bfa,},Annotations:map[string]string{io.kubernetes.container.hash: 29328279,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32539e2eeae6586689f9d53ab02c1f21743eff78b57e6ba758282e9a04aebf8c,PodSandboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707862302357733806,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b0341610a4c2ed431fbf8e24b90da81863880a43e66abaf06df5d42382a3c1,PodSandboxId:2f1f8170ab0ecb6ca0ab3f7e401ede24b6b548093731832b3f80d1db9780fc11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSp
ec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1707862272568041938,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4t7xf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf2232b0-efa6-4c25-80fb-941342d56134,},Annotations:map[string]string{io.kubernetes.container.hash: 7707ebd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bf51b31878da91258feb88c6fa630f89d2cfec46fdeeb683d523a2ae149fc16,PodSandboxId:96a7b7fe4d76d8b1ae10842bba58936c3a0a7917bf0e0916965a7f93fc02d4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b0
0a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1707862272158987213,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-zsjjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ec8fa-7c0a-4b2f-b1d5-8fa93133c281,},Annotations:map[string]string{io.kubernetes.container.hash: 699d870,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a12cbf121e73f9df1e05a5458e240ecea899d4edd9ed8f99869213bd208fb6a8,PodS
andboxId:001a125a9e56d97a5a56c4d06ccc619ecfc5507835719fc3339cc00d780c73f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707862271515811720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22374f85-6acd-4de3-8639-15c40d9403f1,},Annotations:map[string]string{io.kubernetes.container.hash: 8803110b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a54b675236666c4cfb58f4f18a6610e6cf1f2bfea7f4273b5bbe5c682871feb8,PodSan
dboxId:72afa4dc17e3e38e8818fab1fb80b3d9e98087f003cd7968714662e8b7f054a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1707862247783248492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d54f619c05658c0e459e59b973a6048,},Annotations:map[string]string{io.kubernetes.container.hash: 2e09db85,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7b3317aa447fc9ff2ef9db385a9be66ffd90b1dac5b285a80fb9e0f3bfe1037,PodSandboxId:bd9527f325c5782a1c66efcd9609a8db7fbbd68
e8da329ab0171b4a18730ba9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1707862246087448474,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45d9df674179190ee87bda02afad619b502d254ce8635dde2013e34a006d4f79,PodSandboxId:6ced8ab9d
3fbd3adf77b2fdfa8199992e11fffcd7836402aa297ff49279b579f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1707862246034791195,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:950d11004664889847accee96e1576a669b7d8c82d1323c0cfaaf56eb965108a,PodSandboxId:1dbc19233f9aada
dc5196e73b96345d46d17cb507343f508188850c8614a4725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1707862245804355891,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-741217,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5426eca4443ba76d362d57916867e17,},Annotations:map[string]string{io.kubernetes.container.hash: efbfef7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aef559e6-b134-49fd-bc83-cccf06cf295c name=/runtime.v1.RuntimeService
/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17bf1c17fcafb       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            18 seconds ago      Running             hello-world-app           0                   77a8e9e7f1f9f       hello-world-app-5f5d8b66bb-msfxt
	023e6fa2a6796       docker.io/library/nginx@sha256:b1cfc4e0e01b4dceca3265fd4ca97921569fca1a10919639bedfa8dad9127027                    2 minutes ago       Running             nginx                     0                   fc94734b973c3       nginx
	a44bda6ea1dda       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   1b8e6e2292ae7       ingress-nginx-controller-7fcf777cb7-h46bb
	a423537f2317b       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   f3ad70107dd80       ingress-nginx-admission-patch-8sqm9
	75fc7163f4efd       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   b2d26896a0f4c       ingress-nginx-admission-create-l9v9c
	32539e2eeae65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       1                   001a125a9e56d       storage-provisioner
	c4b0341610a4c       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   2f1f8170ab0ec       kube-proxy-4t7xf
	0bf51b31878da       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   96a7b7fe4d76d       coredns-66bff467f8-zsjjs
	a12cbf121e73f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   001a125a9e56d       storage-provisioner
	a54b675236666       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   72afa4dc17e3e       etcd-ingress-addon-legacy-741217
	f7b3317aa447f       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   bd9527f325c57       kube-controller-manager-ingress-addon-legacy-741217
	45d9df6741791       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   6ced8ab9d3fbd       kube-scheduler-ingress-addon-legacy-741217
	950d110046648       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   1dbc19233f9aa       kube-apiserver-ingress-addon-legacy-741217
	
	
	==> coredns [0bf51b31878da91258feb88c6fa630f89d2cfec46fdeeb683d523a2ae149fc16] <==
	[INFO] 10.244.0.6:42823 - 47334 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067949s
	[INFO] 10.244.0.6:57991 - 12256 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074359s
	[INFO] 10.244.0.6:42823 - 20135 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068349s
	[INFO] 10.244.0.6:57991 - 36953 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059246s
	[INFO] 10.244.0.6:42823 - 39519 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006257s
	[INFO] 10.244.0.6:57991 - 47686 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060973s
	[INFO] 10.244.0.6:42823 - 14325 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096618s
	[INFO] 10.244.0.6:57991 - 29145 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061177s
	[INFO] 10.244.0.6:57991 - 51225 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040419s
	[INFO] 10.244.0.6:57991 - 55464 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036205s
	[INFO] 10.244.0.6:57991 - 23512 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000086974s
	[INFO] 10.244.0.6:46363 - 20987 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000075881s
	[INFO] 10.244.0.6:35828 - 50494 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032474s
	[INFO] 10.244.0.6:46363 - 45855 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000092312s
	[INFO] 10.244.0.6:35828 - 25664 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000509126s
	[INFO] 10.244.0.6:35828 - 35760 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056872s
	[INFO] 10.244.0.6:46363 - 40468 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061349s
	[INFO] 10.244.0.6:35828 - 19037 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057546s
	[INFO] 10.244.0.6:46363 - 51098 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000187551s
	[INFO] 10.244.0.6:46363 - 7653 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000282087s
	[INFO] 10.244.0.6:35828 - 39987 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000190009s
	[INFO] 10.244.0.6:46363 - 57188 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096034s
	[INFO] 10.244.0.6:46363 - 15552 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060628s
	[INFO] 10.244.0.6:35828 - 20442 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064112s
	[INFO] 10.244.0.6:35828 - 59288 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062335s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-741217
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-741217
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=ingress-addon-legacy-741217
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T22_10_54_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-741217
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:14:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:12:25 +0000   Tue, 13 Feb 2024 22:10:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:12:25 +0000   Tue, 13 Feb 2024 22:10:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:12:25 +0000   Tue, 13 Feb 2024 22:10:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:12:25 +0000   Tue, 13 Feb 2024 22:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.71
	  Hostname:    ingress-addon-legacy-741217
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 bda391e6512c476db909545b77ee56a0
	  System UUID:                bda391e6-512c-476d-b909-545b77ee56a0
	  Boot ID:                    e4284ed1-0197-46f7-9b31-6aaf6015f729
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-msfxt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-66bff467f8-zsjjs                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-741217                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-apiserver-ingress-addon-legacy-741217             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-741217    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-proxy-4t7xf                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-741217             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x5 over 4m9s)  kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x4 over 4m9s)  kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m58s                kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s                kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s                kubelet     Node ingress-addon-legacy-741217 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m48s                kubelet     Node ingress-addon-legacy-741217 status is now: NodeReady
	  Normal  Starting                 3m41s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Feb13 22:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.095583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.476484] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.577477] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139684] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.080327] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.934533] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.101151] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.133379] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.104930] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.198737] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +8.155560] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +2.989151] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.881476] systemd-fstab-generator[1414]: Ignoring "noauto" for root device
	[Feb13 22:11] kauditd_printk_skb: 6 callbacks suppressed
	[ +36.448154] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.016701] kauditd_printk_skb: 6 callbacks suppressed
	[Feb13 22:12] kauditd_printk_skb: 7 callbacks suppressed
	[Feb13 22:14] kauditd_printk_skb: 5 callbacks suppressed
	[ +13.746431] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [a54b675236666c4cfb58f4f18a6610e6cf1f2bfea7f4273b5bbe5c682871feb8] <==
	raft2024/02/13 22:10:47 INFO: 226d7ac4e2309206 switched to configuration voters=(2480773955778023942)
	2024-02-13 22:10:47.896839 W | auth: simple token is not cryptographically signed
	2024-02-13 22:10:47.900811 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-13 22:10:47.902284 I | etcdserver: 226d7ac4e2309206 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/13 22:10:47 INFO: 226d7ac4e2309206 switched to configuration voters=(2480773955778023942)
	2024-02-13 22:10:47.902732 I | etcdserver/membership: added member 226d7ac4e2309206 [https://192.168.39.71:2380] to cluster 98fbf1e9ed6d9a6e
	2024-02-13 22:10:47.904427 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-13 22:10:47.904736 I | embed: listening for peers on 192.168.39.71:2380
	2024-02-13 22:10:47.904802 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/13 22:10:48 INFO: 226d7ac4e2309206 is starting a new election at term 1
	raft2024/02/13 22:10:48 INFO: 226d7ac4e2309206 became candidate at term 2
	raft2024/02/13 22:10:48 INFO: 226d7ac4e2309206 received MsgVoteResp from 226d7ac4e2309206 at term 2
	raft2024/02/13 22:10:48 INFO: 226d7ac4e2309206 became leader at term 2
	raft2024/02/13 22:10:48 INFO: raft.node: 226d7ac4e2309206 elected leader 226d7ac4e2309206 at term 2
	2024-02-13 22:10:48.390232 I | etcdserver: published {Name:ingress-addon-legacy-741217 ClientURLs:[https://192.168.39.71:2379]} to cluster 98fbf1e9ed6d9a6e
	2024-02-13 22:10:48.390403 I | embed: ready to serve client requests
	2024-02-13 22:10:48.391706 I | embed: ready to serve client requests
	2024-02-13 22:10:48.391857 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-13 22:10:48.392609 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-13 22:10:48.392754 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-13 22:10:48.392999 I | embed: serving client requests on 192.168.39.71:2379
	2024-02-13 22:10:48.394417 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-13 22:11:09.900824 W | etcdserver: request "header:<ID:10522253317156492362 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-5bdc57b48f\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-5bdc57b48f\" value_size:2204 >> failure:<>>" with result "size:16" took too long (493.551977ms) to execute
	2024-02-13 22:11:09.901443 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:209" took too long (443.681598ms) to execute
	2024-02-13 22:11:59.201041 W | etcdserver: read-only range request "key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" " with result "range_response_count:3 size:13723" took too long (138.954197ms) to execute
	
	
	==> kernel <==
	 22:14:53 up 4 min,  0 users,  load average: 0.29, 0.32, 0.16
	Linux ingress-addon-legacy-741217 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [950d11004664889847accee96e1576a669b7d8c82d1323c0cfaaf56eb965108a] <==
	I0213 22:10:51.494593       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 22:10:51.511091       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0213 22:10:51.511158       1 cache.go:39] Caches are synced for autoregister controller
	I0213 22:10:51.511366       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0213 22:10:51.594995       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 22:10:52.392233       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0213 22:10:52.392336       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0213 22:10:52.399382       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0213 22:10:52.405202       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0213 22:10:52.405305       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0213 22:10:52.903403       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 22:10:52.951380       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0213 22:10:53.042246       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.71]
	I0213 22:10:53.043212       1 controller.go:609] quota admission added evaluator for: endpoints
	I0213 22:10:53.046834       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 22:10:53.755924       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0213 22:10:54.564753       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0213 22:10:54.755683       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0213 22:10:55.253839       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 22:11:09.405469       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0213 22:11:09.551262       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0213 22:11:09.904831       1 trace.go:116] Trace[2025603382]: "Create" url:/apis/apps/v1/namespaces/kube-system/controllerrevisions,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/system:serviceaccount:kube-system:daemon-set-controller,client:192.168.39.71 (started: 2024-02-13 22:11:09.404301633 +0000 UTC m=+23.468165825) (total time: 500.498413ms):
	Trace[2025603382]: [500.458415ms] [500.39778ms] Object stored in database
	I0213 22:11:50.430165       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0213 22:12:11.877163       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [f7b3317aa447fc9ff2ef9db385a9be66ffd90b1dac5b285a80fb9e0f3bfe1037] <==
	I0213 22:11:09.699391       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-741217", UID:"d33eb535-658f-4b86-af97-06522e487c4c", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-741217 event: Registered Node ingress-addon-legacy-741217 in Controller
	I0213 22:11:09.699451       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0213 22:11:09.715960       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 22:11:09.749075       1 shared_informer.go:230] Caches are synced for disruption 
	I0213 22:11:09.749135       1 disruption.go:339] Sending events to api server.
	I0213 22:11:09.749221       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 22:11:09.749253       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0213 22:11:09.793450       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0213 22:11:09.799389       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0213 22:11:09.799576       1 shared_informer.go:230] Caches are synced for resource quota 
	I0213 22:11:09.906997       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3541f4b1-8c8e-4c10-aed2-3ebc07131161", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0213 22:11:09.942247       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"16f647d4-e52c-450a-9e64-562381eeff91", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-hjljs
	I0213 22:11:09.944585       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"06a04c23-35af-4b84-a2f3-fd01c34c136a", APIVersion:"apps/v1", ResourceVersion:"214", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-4t7xf
	I0213 22:11:09.981678       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"16f647d4-e52c-450a-9e64-562381eeff91", APIVersion:"apps/v1", ResourceVersion:"325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-zsjjs
	I0213 22:11:10.145994       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3541f4b1-8c8e-4c10-aed2-3ebc07131161", APIVersion:"apps/v1", ResourceVersion:"338", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0213 22:11:10.226644       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"16f647d4-e52c-450a-9e64-562381eeff91", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-hjljs
	I0213 22:11:50.391615       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"85d1b955-081d-40c5-ae1a-2577f0b0b5f8", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0213 22:11:50.422994       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"8e87bc0c-4988-40b4-aa37-16aab0007223", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-h46bb
	I0213 22:11:50.464377       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3fe39de2-42de-49e8-a7f0-5128e694cec3", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-l9v9c
	I0213 22:11:50.540937       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bde81ecf-8d5f-4254-a039-056b1a8769a6", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-8sqm9
	I0213 22:11:53.378933       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"3fe39de2-42de-49e8-a7f0-5128e694cec3", APIVersion:"batch/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 22:11:53.405060       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bde81ecf-8d5f-4254-a039-056b1a8769a6", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0213 22:14:31.755176       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"3d194c76-7efa-433c-a6c0-85c1c5360b9d", APIVersion:"apps/v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0213 22:14:31.781856       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"42415a4c-5a56-4230-a4af-e29763e54ef9", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-msfxt
	E0213 22:14:50.826856       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-zkdvg" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [c4b0341610a4c2ed431fbf8e24b90da81863880a43e66abaf06df5d42382a3c1] <==
	W0213 22:11:12.750618       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0213 22:11:12.759350       1 node.go:136] Successfully retrieved node IP: 192.168.39.71
	I0213 22:11:12.759457       1 server_others.go:186] Using iptables Proxier.
	I0213 22:11:12.759849       1 server.go:583] Version: v1.18.20
	I0213 22:11:12.763684       1 config.go:315] Starting service config controller
	I0213 22:11:12.763725       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0213 22:11:12.763790       1 config.go:133] Starting endpoints config controller
	I0213 22:11:12.763840       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0213 22:11:12.864245       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0213 22:11:12.864382       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [45d9df674179190ee87bda02afad619b502d254ce8635dde2013e34a006d4f79] <==
	I0213 22:10:51.501847       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0213 22:10:51.502168       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0213 22:10:51.502349       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:10:51.507220       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0213 22:10:51.508546       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 22:10:51.508787       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:10:51.508827       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:10:51.509014       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:10:51.509154       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:10:51.509242       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:10:51.509449       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:10:51.509528       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:10:51.509698       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:10:51.509949       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:10:51.510028       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:10:51.510047       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:10:52.351018       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:10:52.426121       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:10:52.431703       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:10:52.479293       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:10:52.510624       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:10:52.667780       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:10:52.709020       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:10:52.863244       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0213 22:10:55.707591       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:10:21 UTC, ends at Tue 2024-02-13 22:14:54 UTC. --
	Feb 13 22:11:54 ingress-addon-legacy-741217 kubelet[1421]: W0213 22:11:54.380779    1421 pod_container_deletor.go:77] Container "b2d26896a0f4c25a6e7a95a7c0e1cc916af42926b994e45e63b6be7ced67f924" not found in pod's containers
	Feb 13 22:11:54 ingress-addon-legacy-741217 kubelet[1421]: W0213 22:11:54.391714    1421 pod_container_deletor.go:77] Container "f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904" not found in pod's containers
	Feb 13 22:11:55 ingress-addon-legacy-741217 kubelet[1421]: E0213 22:11:55.393386    1421 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/pod5d90ce00-2b40-441c-b1a8-f24eed4ad07f": RecentStats: unable to find data in memory cache], ["/kubepods/besteffort/pod5d90ce00-2b40-441c-b1a8-f24eed4ad07f/crio-conmon-f3ad70107dd80ddab1907ea574febf1a601ac24b5bfc3d241818f7552a039904": RecentStats: unable to find data in memory cache]
	Feb 13 22:12:03 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:12:03.690196    1421 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Feb 13 22:12:03 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:12:03.840272    1421 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-ndztv" (UniqueName: "kubernetes.io/secret/77d27906-ea21-4212-bc1a-e902166cfcfb-minikube-ingress-dns-token-ndztv") pod "kube-ingress-dns-minikube" (UID: "77d27906-ea21-4212-bc1a-e902166cfcfb")
	Feb 13 22:12:12 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:12:12.066763    1421 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Feb 13 22:12:12 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:12:12.170075    1421 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rdjc9" (UniqueName: "kubernetes.io/secret/d14fae32-392d-4e2b-b4b0-41613ce45b7f-default-token-rdjc9") pod "nginx" (UID: "d14fae32-392d-4e2b-b4b0-41613ce45b7f")
	Feb 13 22:14:31 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:31.796667    1421 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Feb 13 22:14:31 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:31.870916    1421 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rdjc9" (UniqueName: "kubernetes.io/secret/38636c2d-a30c-48ad-8f59-9ed6115f51d3-default-token-rdjc9") pod "hello-world-app-5f5d8b66bb-msfxt" (UID: "38636c2d-a30c-48ad-8f59-9ed6115f51d3")
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:33.828442    1421 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dcba04635ea97a915e1cf4fd00eea3263a388d98031867beefc2ace841cf0442
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:33.877331    1421 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-ndztv" (UniqueName: "kubernetes.io/secret/77d27906-ea21-4212-bc1a-e902166cfcfb-minikube-ingress-dns-token-ndztv") pod "77d27906-ea21-4212-bc1a-e902166cfcfb" (UID: "77d27906-ea21-4212-bc1a-e902166cfcfb")
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:33.882298    1421 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d27906-ea21-4212-bc1a-e902166cfcfb-minikube-ingress-dns-token-ndztv" (OuterVolumeSpecName: "minikube-ingress-dns-token-ndztv") pod "77d27906-ea21-4212-bc1a-e902166cfcfb" (UID: "77d27906-ea21-4212-bc1a-e902166cfcfb"). InnerVolumeSpecName "minikube-ingress-dns-token-ndztv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:33.963416    1421 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: dcba04635ea97a915e1cf4fd00eea3263a388d98031867beefc2ace841cf0442
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: E0213 22:14:33.964588    1421 remote_runtime.go:295] ContainerStatus "dcba04635ea97a915e1cf4fd00eea3263a388d98031867beefc2ace841cf0442" from runtime service failed: rpc error: code = NotFound desc = could not find container "dcba04635ea97a915e1cf4fd00eea3263a388d98031867beefc2ace841cf0442": container with ID starting with dcba04635ea97a915e1cf4fd00eea3263a388d98031867beefc2ace841cf0442 not found: ID does not exist
	Feb 13 22:14:33 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:33.979586    1421 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-ndztv" (UniqueName: "kubernetes.io/secret/77d27906-ea21-4212-bc1a-e902166cfcfb-minikube-ingress-dns-token-ndztv") on node "ingress-addon-legacy-741217" DevicePath ""
	Feb 13 22:14:46 ingress-addon-legacy-741217 kubelet[1421]: E0213 22:14:46.080839    1421 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-h46bb.17b38be1e7d61a26", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-h46bb", UID:"de89edf9-85ee-4998-b08a-768a0a423022", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-741217"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b185584973e26, ext:231582432606, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b185584973e26, ext:231582432606, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-h46bb.17b38be1e7d61a26" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 22:14:46 ingress-addon-legacy-741217 kubelet[1421]: E0213 22:14:46.103646    1421 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-h46bb.17b38be1e7d61a26", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-h46bb", UID:"de89edf9-85ee-4998-b08a-768a0a423022", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-741217"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b185584973e26, ext:231582432606, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b185585699539, ext:231596217456, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-h46bb.17b38be1e7d61a26" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 13 22:14:48 ingress-addon-legacy-741217 kubelet[1421]: W0213 22:14:48.885331    1421 pod_container_deletor.go:77] Container "1b8e6e2292ae729cbed72ded4bec3f6105c1ee2815ee1a4a4f35a2f704e37a5e" not found in pod's containers
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.138039    1421 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-webhook-cert") pod "de89edf9-85ee-4998-b08a-768a0a423022" (UID: "de89edf9-85ee-4998-b08a-768a0a423022")
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.138085    1421 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-f44nr" (UniqueName: "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-ingress-nginx-token-f44nr") pod "de89edf9-85ee-4998-b08a-768a0a423022" (UID: "de89edf9-85ee-4998-b08a-768a0a423022")
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.143756    1421 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "de89edf9-85ee-4998-b08a-768a0a423022" (UID: "de89edf9-85ee-4998-b08a-768a0a423022"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.144176    1421 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-ingress-nginx-token-f44nr" (OuterVolumeSpecName: "ingress-nginx-token-f44nr") pod "de89edf9-85ee-4998-b08a-768a0a423022" (UID: "de89edf9-85ee-4998-b08a-768a0a423022"). InnerVolumeSpecName "ingress-nginx-token-f44nr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.238545    1421 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-webhook-cert") on node "ingress-addon-legacy-741217" DevicePath ""
	Feb 13 22:14:50 ingress-addon-legacy-741217 kubelet[1421]: I0213 22:14:50.238619    1421 reconciler.go:319] Volume detached for volume "ingress-nginx-token-f44nr" (UniqueName: "kubernetes.io/secret/de89edf9-85ee-4998-b08a-768a0a423022-ingress-nginx-token-f44nr") on node "ingress-addon-legacy-741217" DevicePath ""
	Feb 13 22:14:51 ingress-addon-legacy-741217 kubelet[1421]: W0213 22:14:51.117946    1421 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/de89edf9-85ee-4998-b08a-768a0a423022/volumes" does not exist
	
	
	==> storage-provisioner [32539e2eeae6586689f9d53ab02c1f21743eff78b57e6ba758282e9a04aebf8c] <==
	I0213 22:11:42.475718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:11:42.490873       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:11:42.491931       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:11:42.504834       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:11:42.505711       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-741217_9efd1e44-134e-4093-a674-31c0a1207405!
	I0213 22:11:42.505953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5d37def-797b-429c-86b4-7beb01ebfefc", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-741217_9efd1e44-134e-4093-a674-31c0a1207405 became leader
	I0213 22:11:42.606577       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-741217_9efd1e44-134e-4093-a674-31c0a1207405!
	
	
	==> storage-provisioner [a12cbf121e73f9df1e05a5458e240ecea899d4edd9ed8f99869213bd208fb6a8] <==
	I0213 22:11:11.623380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0213 22:11:41.625332       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-741217 -n ingress-addon-legacy-741217
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-741217 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (170.93s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (690.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-413653
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-413653
E0213 22:24:11.137100   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:24:21.414209   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-413653: exit status 82 (2m0.279682186s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-413653"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-413653" : exit status 82
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-413653 --wait=true -v=8 --alsologtostderr
E0213 22:25:44.461257   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:27:03.710850   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:29:11.137132   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:29:21.413486   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:30:34.183729   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:32:03.710875   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:33:26.757656   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:34:11.137317   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:34:21.413658   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-413653 --wait=true -v=8 --alsologtostderr: (9m27.609442433s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-413653
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-413653 -n multinode-413653
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-413653 logs -n 25: (1.619342339s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3140751563/001/cp-test_multinode-413653-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653:/home/docker/cp-test_multinode-413653-m02_multinode-413653.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n multinode-413653 sudo cat                                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /home/docker/cp-test_multinode-413653-m02_multinode-413653.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03:/home/docker/cp-test_multinode-413653-m02_multinode-413653-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n multinode-413653-m03 sudo cat                                   | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /home/docker/cp-test_multinode-413653-m02_multinode-413653-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp testdata/cp-test.txt                                                | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3140751563/001/cp-test_multinode-413653-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653:/home/docker/cp-test_multinode-413653-m03_multinode-413653.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n multinode-413653 sudo cat                                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /home/docker/cp-test_multinode-413653-m03_multinode-413653.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt                       | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m02:/home/docker/cp-test_multinode-413653-m03_multinode-413653-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n                                                                 | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | multinode-413653-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-413653 ssh -n multinode-413653-m02 sudo cat                                   | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	|         | /home/docker/cp-test_multinode-413653-m03_multinode-413653-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-413653 node stop m03                                                          | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:22 UTC |
	| node    | multinode-413653 node start                                                             | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:22 UTC | 13 Feb 24 22:23 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-413653                                                                | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:23 UTC |                     |
	| stop    | -p multinode-413653                                                                     | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:23 UTC |                     |
	| start   | -p multinode-413653                                                                     | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:25 UTC | 13 Feb 24 22:34 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-413653                                                                | multinode-413653 | jenkins | v1.32.0 | 13 Feb 24 22:34 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 22:25:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 22:25:01.601735   32908 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:25:01.602037   32908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:25:01.602047   32908 out.go:304] Setting ErrFile to fd 2...
	I0213 22:25:01.602052   32908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:25:01.602238   32908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:25:01.602844   32908 out.go:298] Setting JSON to false
	I0213 22:25:01.603754   32908 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4053,"bootTime":1707859049,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:25:01.603840   32908 start.go:138] virtualization: kvm guest
	I0213 22:25:01.606531   32908 out.go:177] * [multinode-413653] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:25:01.607931   32908 notify.go:220] Checking for updates...
	I0213 22:25:01.609395   32908 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:25:01.610959   32908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:25:01.612435   32908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:25:01.613864   32908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:25:01.615170   32908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:25:01.616626   32908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:25:01.618399   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:25:01.618503   32908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:25:01.618898   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:25:01.618939   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:25:01.634389   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46063
	I0213 22:25:01.634781   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:25:01.635221   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:25:01.635244   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:25:01.635536   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:25:01.635692   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:25:01.672235   32908 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 22:25:01.673541   32908 start.go:298] selected driver: kvm2
	I0213 22:25:01.673558   32908 start.go:902] validating driver "kvm2" against &{Name:multinode-413653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:25:01.673686   32908 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:25:01.674008   32908 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 22:25:01.674109   32908 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 22:25:01.688420   32908 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 22:25:01.689106   32908 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 22:25:01.689172   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:25:01.689187   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:25:01.689197   32908 start_flags.go:321] config:
	{Name:multinode-413653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:25:01.689415   32908 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 22:25:01.691958   32908 out.go:177] * Starting control plane node multinode-413653 in cluster multinode-413653
	I0213 22:25:01.693394   32908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 22:25:01.693450   32908 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 22:25:01.693471   32908 cache.go:56] Caching tarball of preloaded images
	I0213 22:25:01.693568   32908 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 22:25:01.693582   32908 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 22:25:01.693736   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:25:01.693995   32908 start.go:365] acquiring machines lock for multinode-413653: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 22:25:01.694048   32908 start.go:369] acquired machines lock for "multinode-413653" in 25.387µs
	I0213 22:25:01.694068   32908 start.go:96] Skipping create...Using existing machine configuration
	I0213 22:25:01.694077   32908 fix.go:54] fixHost starting: 
	I0213 22:25:01.694374   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:25:01.694409   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:25:01.708463   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0213 22:25:01.708865   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:25:01.709357   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:25:01.709385   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:25:01.710022   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:25:01.711185   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:25:01.711400   32908 main.go:141] libmachine: (multinode-413653) Calling .GetState
	I0213 22:25:01.713018   32908 fix.go:102] recreateIfNeeded on multinode-413653: state=Running err=<nil>
	W0213 22:25:01.713064   32908 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 22:25:01.715881   32908 out.go:177] * Updating the running kvm2 "multinode-413653" VM ...
	I0213 22:25:01.717373   32908 machine.go:88] provisioning docker machine ...
	I0213 22:25:01.717414   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:25:01.717661   32908 main.go:141] libmachine: (multinode-413653) Calling .GetMachineName
	I0213 22:25:01.717818   32908 buildroot.go:166] provisioning hostname "multinode-413653"
	I0213 22:25:01.717837   32908 main.go:141] libmachine: (multinode-413653) Calling .GetMachineName
	I0213 22:25:01.717979   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:25:01.720564   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:25:01.721006   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:25:01.721029   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:25:01.721219   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:25:01.721415   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:25:01.721588   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:25:01.721725   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:25:01.721909   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:25:01.722444   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0213 22:25:01.722465   32908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-413653 && echo "multinode-413653" | sudo tee /etc/hostname
	I0213 22:25:20.150170   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:26.230196   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:29.302191   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:35.382172   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:38.454178   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:44.534160   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:47.606196   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:53.690184   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:25:56.758125   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:02.838140   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:05.910183   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:11.990182   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:15.062144   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:21.142191   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:24.214262   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:30.294210   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:33.366104   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:39.446193   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:42.518198   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:48.598159   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:51.670234   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:26:57.750202   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:00.822244   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:06.902195   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:09.974278   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:16.054187   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:19.126150   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:25.206200   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:28.278166   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:34.358223   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:37.430204   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:43.510165   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:46.582241   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:52.662203   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:27:55.734220   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:01.814182   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:04.886165   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:10.966234   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:14.038164   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:20.118164   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:23.190202   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:29.270182   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:32.342155   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:38.422174   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:41.494122   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:47.574166   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:50.646158   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:56.726178   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:28:59.798140   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:05.878257   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:08.950240   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:15.030200   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:18.102226   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:24.182113   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:27.254250   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:33.334199   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:36.406176   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:42.486156   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:45.558181   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:51.638218   32908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.81:22: connect: no route to host
	I0213 22:29:54.639064   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 22:29:54.639101   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:29:54.641161   32908 machine.go:91] provisioned docker machine in 4m52.9237654s
	I0213 22:29:54.641225   32908 fix.go:56] fixHost completed within 4m52.947147736s
	I0213 22:29:54.641238   32908 start.go:83] releasing machines lock for "multinode-413653", held for 4m52.94717732s
	W0213 22:29:54.641255   32908 start.go:694] error starting host: provision: host is not running
	W0213 22:29:54.641359   32908 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 22:29:54.641369   32908 start.go:709] Will try again in 5 seconds ...
	I0213 22:29:59.643424   32908 start.go:365] acquiring machines lock for multinode-413653: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 22:29:59.643514   32908 start.go:369] acquired machines lock for "multinode-413653" in 53.798µs
	I0213 22:29:59.643534   32908 start.go:96] Skipping create...Using existing machine configuration
	I0213 22:29:59.643542   32908 fix.go:54] fixHost starting: 
	I0213 22:29:59.643842   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:29:59.643863   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:29:59.658232   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40957
	I0213 22:29:59.658708   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:29:59.659235   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:29:59.659259   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:29:59.659585   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:29:59.659794   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:29:59.660021   32908 main.go:141] libmachine: (multinode-413653) Calling .GetState
	I0213 22:29:59.661898   32908 fix.go:102] recreateIfNeeded on multinode-413653: state=Stopped err=<nil>
	I0213 22:29:59.661924   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	W0213 22:29:59.662125   32908 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 22:29:59.664767   32908 out.go:177] * Restarting existing kvm2 VM for "multinode-413653" ...
	I0213 22:29:59.666408   32908 main.go:141] libmachine: (multinode-413653) Calling .Start
	I0213 22:29:59.666585   32908 main.go:141] libmachine: (multinode-413653) Ensuring networks are active...
	I0213 22:29:59.667377   32908 main.go:141] libmachine: (multinode-413653) Ensuring network default is active
	I0213 22:29:59.667724   32908 main.go:141] libmachine: (multinode-413653) Ensuring network mk-multinode-413653 is active
	I0213 22:29:59.668132   32908 main.go:141] libmachine: (multinode-413653) Getting domain xml...
	I0213 22:29:59.668878   32908 main.go:141] libmachine: (multinode-413653) Creating domain...
	I0213 22:30:00.881671   32908 main.go:141] libmachine: (multinode-413653) Waiting to get IP...
	I0213 22:30:00.882581   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:00.883030   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:00.883130   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:00.883024   33708 retry.go:31] will retry after 237.185527ms: waiting for machine to come up
	I0213 22:30:01.121462   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:01.121967   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:01.122005   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:01.121928   33708 retry.go:31] will retry after 350.599086ms: waiting for machine to come up
	I0213 22:30:01.474616   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:01.475197   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:01.475228   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:01.475131   33708 retry.go:31] will retry after 436.869189ms: waiting for machine to come up
	I0213 22:30:01.913893   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:01.914379   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:01.914411   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:01.914326   33708 retry.go:31] will retry after 377.54971ms: waiting for machine to come up
	I0213 22:30:02.293860   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:02.294294   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:02.294317   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:02.294243   33708 retry.go:31] will retry after 591.628388ms: waiting for machine to come up
	I0213 22:30:02.887048   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:02.887498   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:02.887533   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:02.887475   33708 retry.go:31] will retry after 584.17119ms: waiting for machine to come up
	I0213 22:30:03.473488   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:03.473930   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:03.473960   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:03.473894   33708 retry.go:31] will retry after 817.613027ms: waiting for machine to come up
	I0213 22:30:04.292666   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:04.293128   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:04.293160   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:04.293068   33708 retry.go:31] will retry after 1.21684682s: waiting for machine to come up
	I0213 22:30:05.511350   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:05.511872   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:05.511896   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:05.511826   33708 retry.go:31] will retry after 1.499850618s: waiting for machine to come up
	I0213 22:30:07.013517   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:07.014164   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:07.014202   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:07.014091   33708 retry.go:31] will retry after 1.765234769s: waiting for machine to come up
	I0213 22:30:08.781215   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:08.781783   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:08.781817   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:08.781740   33708 retry.go:31] will retry after 1.778526671s: waiting for machine to come up
	I0213 22:30:10.561470   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:10.561995   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:10.562025   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:10.561943   33708 retry.go:31] will retry after 2.222264824s: waiting for machine to come up
	I0213 22:30:12.787404   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:12.787946   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:12.787974   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:12.787899   33708 retry.go:31] will retry after 2.906245468s: waiting for machine to come up
	I0213 22:30:15.696295   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:15.696719   32908 main.go:141] libmachine: (multinode-413653) DBG | unable to find current IP address of domain multinode-413653 in network mk-multinode-413653
	I0213 22:30:15.696743   32908 main.go:141] libmachine: (multinode-413653) DBG | I0213 22:30:15.696670   33708 retry.go:31] will retry after 4.600381997s: waiting for machine to come up
	I0213 22:30:20.298748   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.299261   32908 main.go:141] libmachine: (multinode-413653) Found IP for machine: 192.168.39.81
	I0213 22:30:20.299290   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has current primary IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.299306   32908 main.go:141] libmachine: (multinode-413653) Reserving static IP address...
	I0213 22:30:20.299745   32908 main.go:141] libmachine: (multinode-413653) Reserved static IP address: 192.168.39.81
	I0213 22:30:20.299762   32908 main.go:141] libmachine: (multinode-413653) Waiting for SSH to be available...
	I0213 22:30:20.299779   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "multinode-413653", mac: "52:54:00:cc:d7:5b", ip: "192.168.39.81"} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.299829   32908 main.go:141] libmachine: (multinode-413653) DBG | skip adding static IP to network mk-multinode-413653 - found existing host DHCP lease matching {name: "multinode-413653", mac: "52:54:00:cc:d7:5b", ip: "192.168.39.81"}
	I0213 22:30:20.299851   32908 main.go:141] libmachine: (multinode-413653) DBG | Getting to WaitForSSH function...
	I0213 22:30:20.302084   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.302402   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.302434   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.302560   32908 main.go:141] libmachine: (multinode-413653) DBG | Using SSH client type: external
	I0213 22:30:20.302586   32908 main.go:141] libmachine: (multinode-413653) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa (-rw-------)
	I0213 22:30:20.302622   32908 main.go:141] libmachine: (multinode-413653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.81 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 22:30:20.302641   32908 main.go:141] libmachine: (multinode-413653) DBG | About to run SSH command:
	I0213 22:30:20.302653   32908 main.go:141] libmachine: (multinode-413653) DBG | exit 0
	I0213 22:30:20.398217   32908 main.go:141] libmachine: (multinode-413653) DBG | SSH cmd err, output: <nil>: 
	I0213 22:30:20.398605   32908 main.go:141] libmachine: (multinode-413653) Calling .GetConfigRaw
	I0213 22:30:20.399153   32908 main.go:141] libmachine: (multinode-413653) Calling .GetIP
	I0213 22:30:20.401392   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.401756   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.401790   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.402089   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:30:20.402394   32908 machine.go:88] provisioning docker machine ...
	I0213 22:30:20.402445   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:20.402811   32908 main.go:141] libmachine: (multinode-413653) Calling .GetMachineName
	I0213 22:30:20.403053   32908 buildroot.go:166] provisioning hostname "multinode-413653"
	I0213 22:30:20.403081   32908 main.go:141] libmachine: (multinode-413653) Calling .GetMachineName
	I0213 22:30:20.403332   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:20.405400   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.405809   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.405835   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.405948   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:20.406117   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:20.406231   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:20.406361   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:20.406545   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:30:20.406889   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0213 22:30:20.406903   32908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-413653 && echo "multinode-413653" | sudo tee /etc/hostname
	I0213 22:30:20.550921   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-413653
	
	I0213 22:30:20.550956   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:20.554187   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.554551   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.554581   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.554745   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:20.554950   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:20.555181   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:20.555363   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:20.555562   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:30:20.555912   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0213 22:30:20.555942   32908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-413653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-413653/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-413653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 22:30:20.694364   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 22:30:20.694393   32908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 22:30:20.694439   32908 buildroot.go:174] setting up certificates
	I0213 22:30:20.694454   32908 provision.go:83] configureAuth start
	I0213 22:30:20.694467   32908 main.go:141] libmachine: (multinode-413653) Calling .GetMachineName
	I0213 22:30:20.694722   32908 main.go:141] libmachine: (multinode-413653) Calling .GetIP
	I0213 22:30:20.697398   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.697790   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.697821   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.697957   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:20.700162   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.700410   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.700438   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.700553   32908 provision.go:138] copyHostCerts
	I0213 22:30:20.700585   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:30:20.700630   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 22:30:20.700651   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:30:20.700735   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 22:30:20.700842   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:30:20.700873   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 22:30:20.700883   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:30:20.700924   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 22:30:20.700984   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:30:20.701008   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 22:30:20.701017   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:30:20.701049   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 22:30:20.701114   32908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.multinode-413653 san=[192.168.39.81 192.168.39.81 localhost 127.0.0.1 minikube multinode-413653]
	I0213 22:30:20.879066   32908 provision.go:172] copyRemoteCerts
	I0213 22:30:20.879133   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 22:30:20.879164   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:20.881824   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.882273   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:20.882314   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:20.882524   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:20.882714   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:20.882870   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:20.882992   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:30:20.975914   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 22:30:20.975988   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 22:30:20.999886   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 22:30:20.999975   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 22:30:21.023455   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 22:30:21.023540   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0213 22:30:21.046668   32908 provision.go:86] duration metric: configureAuth took 352.198932ms
	I0213 22:30:21.046694   32908 buildroot.go:189] setting minikube options for container-runtime
	I0213 22:30:21.046903   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:30:21.046972   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:21.049913   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.050334   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.050364   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.050541   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:21.050763   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.050930   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.051062   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:21.051263   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:30:21.051747   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0213 22:30:21.051772   32908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 22:30:21.387576   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 22:30:21.387619   32908 machine.go:91] provisioned docker machine in 985.200085ms
	I0213 22:30:21.387631   32908 start.go:300] post-start starting for "multinode-413653" (driver="kvm2")
	I0213 22:30:21.387662   32908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 22:30:21.387693   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:21.388057   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 22:30:21.388092   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:21.390679   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.391020   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.391037   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.391179   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:21.391358   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.391506   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:21.391626   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:30:21.488917   32908 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 22:30:21.493163   32908 command_runner.go:130] > NAME=Buildroot
	I0213 22:30:21.493199   32908 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0213 22:30:21.493207   32908 command_runner.go:130] > ID=buildroot
	I0213 22:30:21.493214   32908 command_runner.go:130] > VERSION_ID=2021.02.12
	I0213 22:30:21.493219   32908 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0213 22:30:21.493247   32908 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 22:30:21.493259   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 22:30:21.493327   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 22:30:21.493401   32908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 22:30:21.493411   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /etc/ssl/certs/162002.pem
	I0213 22:30:21.493486   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 22:30:21.503190   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:30:21.528619   32908 start.go:303] post-start completed in 140.970658ms
	I0213 22:30:21.528650   32908 fix.go:56] fixHost completed within 21.88510705s
	I0213 22:30:21.528671   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:21.531046   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.531406   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.531440   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.531579   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:21.531788   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.531903   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.532043   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:21.532204   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:30:21.532564   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I0213 22:30:21.532578   32908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 22:30:21.662816   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707863421.612624957
	
	I0213 22:30:21.662835   32908 fix.go:206] guest clock: 1707863421.612624957
	I0213 22:30:21.662842   32908 fix.go:219] Guest: 2024-02-13 22:30:21.612624957 +0000 UTC Remote: 2024-02-13 22:30:21.528654302 +0000 UTC m=+319.975369611 (delta=83.970655ms)
	I0213 22:30:21.662860   32908 fix.go:190] guest clock delta is within tolerance: 83.970655ms
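(For reference, the delta reported above is simply guest wall-clock minus host wall-clock at the moment of the check: 1707863421.612624957 − 1707863421.528654302 ≈ 0.083970655 s, i.e. the 83.970655ms shown, so fix.go treats it as within tolerance and leaves the guest clock alone.)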
	I0213 22:30:21.662864   32908 start.go:83] releasing machines lock for "multinode-413653", held for 22.019343064s
	I0213 22:30:21.662881   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:21.663137   32908 main.go:141] libmachine: (multinode-413653) Calling .GetIP
	I0213 22:30:21.665680   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.666122   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.666175   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.666343   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:21.666817   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:21.666993   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:30:21.667072   32908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 22:30:21.667109   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:21.667172   32908 ssh_runner.go:195] Run: cat /version.json
	I0213 22:30:21.667198   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:30:21.669445   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.669752   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.669782   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.669800   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.669940   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:21.670133   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.670208   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:21.670233   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:21.670298   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:21.670341   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:30:21.670410   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:30:21.670482   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:30:21.670611   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:30:21.670708   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:30:21.759171   32908 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0213 22:30:21.759465   32908 ssh_runner.go:195] Run: systemctl --version
	I0213 22:30:21.785439   32908 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0213 22:30:21.785625   32908 command_runner.go:130] > systemd 247 (247)
	I0213 22:30:21.785669   32908 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0213 22:30:21.785734   32908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 22:30:21.925061   32908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 22:30:21.931348   32908 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0213 22:30:21.931742   32908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 22:30:21.931814   32908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 22:30:21.945880   32908 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0213 22:30:21.946180   32908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 22:30:21.946200   32908 start.go:475] detecting cgroup driver to use...
	I0213 22:30:21.946277   32908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 22:30:21.962690   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 22:30:21.975633   32908 docker.go:217] disabling cri-docker service (if available) ...
	I0213 22:30:21.975708   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 22:30:21.988185   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 22:30:22.002316   32908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 22:30:22.105647   32908 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0213 22:30:22.105732   32908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 22:30:22.120541   32908 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0213 22:30:22.235092   32908 docker.go:233] disabling docker service ...
	I0213 22:30:22.235173   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 22:30:22.250158   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 22:30:22.262753   32908 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0213 22:30:22.263083   32908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 22:30:22.377390   32908 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0213 22:30:22.377472   32908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 22:30:22.391240   32908 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0213 22:30:22.391653   32908 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0213 22:30:22.496947   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 22:30:22.510841   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 22:30:22.528580   32908 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0213 22:30:22.528882   32908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 22:30:22.528955   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:30:22.538928   32908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 22:30:22.538993   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:30:22.548885   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:30:22.558644   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
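The three sed edits above are easier to follow with the target file in view. As a rough sketch only (the exact drop-in layout ships with the minikube guest image, so the section names here are an assumption), /etc/crio/crio.conf.d/02-crio.conf ends up carrying entries along these lines:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

Note that any existing conmon_cgroup line is deleted first and then re-added immediately after cgroup_manager, so the two settings always travel together.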
	I0213 22:30:22.568226   32908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 22:30:22.577944   32908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 22:30:22.586243   32908 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 22:30:22.586503   32908 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 22:30:22.586556   32908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 22:30:22.598705   32908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 22:30:22.608317   32908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 22:30:22.725464   32908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 22:30:22.892860   32908 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 22:30:22.892947   32908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 22:30:22.898548   32908 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0213 22:30:22.898589   32908 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0213 22:30:22.898600   32908 command_runner.go:130] > Device: 16h/22d	Inode: 848         Links: 1
	I0213 22:30:22.898612   32908 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:30:22.898621   32908 command_runner.go:130] > Access: 2024-02-13 22:30:22.825425470 +0000
	I0213 22:30:22.898632   32908 command_runner.go:130] > Modify: 2024-02-13 22:30:22.825425470 +0000
	I0213 22:30:22.898643   32908 command_runner.go:130] > Change: 2024-02-13 22:30:22.825425470 +0000
	I0213 22:30:22.898651   32908 command_runner.go:130] >  Birth: -
	I0213 22:30:22.898718   32908 start.go:543] Will wait 60s for crictl version
	I0213 22:30:22.898775   32908 ssh_runner.go:195] Run: which crictl
	I0213 22:30:22.902552   32908 command_runner.go:130] > /usr/bin/crictl
	I0213 22:30:22.902638   32908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 22:30:22.938302   32908 command_runner.go:130] > Version:  0.1.0
	I0213 22:30:22.938331   32908 command_runner.go:130] > RuntimeName:  cri-o
	I0213 22:30:22.938339   32908 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0213 22:30:22.938347   32908 command_runner.go:130] > RuntimeApiVersion:  v1
	I0213 22:30:22.938368   32908 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 22:30:22.938447   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:30:22.985792   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:30:22.985824   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:30:22.985834   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:30:22.985849   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:30:22.985859   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:30:22.985878   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:30:22.985886   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:30:22.985893   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:30:22.985901   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:30:22.985912   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:30:22.985919   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:30:22.985926   32908 command_runner.go:130] > AppArmorEnabled:  false
	I0213 22:30:22.986008   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:30:23.027842   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:30:23.027873   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:30:23.027883   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:30:23.027889   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:30:23.027898   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:30:23.027906   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:30:23.027913   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:30:23.027926   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:30:23.027937   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:30:23.027948   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:30:23.027973   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:30:23.027987   32908 command_runner.go:130] > AppArmorEnabled:  false
	I0213 22:30:23.029812   32908 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 22:30:23.031069   32908 main.go:141] libmachine: (multinode-413653) Calling .GetIP
	I0213 22:30:23.033753   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:23.034187   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:30:23.034213   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:30:23.034372   32908 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 22:30:23.038269   32908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 22:30:23.049717   32908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 22:30:23.049775   32908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 22:30:23.089570   32908 command_runner.go:130] > {
	I0213 22:30:23.089599   32908 command_runner.go:130] >   "images": [
	I0213 22:30:23.089605   32908 command_runner.go:130] >     {
	I0213 22:30:23.089618   32908 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0213 22:30:23.089625   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:23.089633   32908 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0213 22:30:23.089639   32908 command_runner.go:130] >       ],
	I0213 22:30:23.089646   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:23.089677   32908 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0213 22:30:23.089692   32908 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0213 22:30:23.089697   32908 command_runner.go:130] >       ],
	I0213 22:30:23.089703   32908 command_runner.go:130] >       "size": "750414",
	I0213 22:30:23.089707   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:23.089712   32908 command_runner.go:130] >         "value": "65535"
	I0213 22:30:23.089716   32908 command_runner.go:130] >       },
	I0213 22:30:23.089723   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:23.089732   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:23.089738   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:23.089741   32908 command_runner.go:130] >     }
	I0213 22:30:23.089747   32908 command_runner.go:130] >   ]
	I0213 22:30:23.089751   32908 command_runner.go:130] > }
	I0213 22:30:23.090819   32908 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
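The decision at crio.go:492 boils down to scanning the `crictl images --output json` listing above for the expected control-plane image tag. A minimal, self-contained sketch of that check (not minikube's actual code; the exec invocation and error handling here are assumptions, while the JSON field names follow the listing shown above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the shape of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the runtime already knows the given tag.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("assuming images are not preloaded")
    	}
    }

Since only registry.k8s.io/pause:3.9 was present at this point, the check fails and minikube falls back to copying and extracting the preload tarball, as the next lines show.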
	I0213 22:30:23.090891   32908 ssh_runner.go:195] Run: which lz4
	I0213 22:30:23.094549   32908 command_runner.go:130] > /usr/bin/lz4
	I0213 22:30:23.094748   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0213 22:30:23.094840   32908 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 22:30:23.099031   32908 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 22:30:23.099098   32908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 22:30:23.099127   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 22:30:24.989377   32908 crio.go:444] Took 1.894566 seconds to copy over tarball
	I0213 22:30:24.989482   32908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 22:30:27.819021   32908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.829512364s)
	I0213 22:30:27.819046   32908 crio.go:451] Took 2.829623 seconds to extract the tarball
	I0213 22:30:27.819054   32908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 22:30:27.860401   32908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 22:30:27.914100   32908 command_runner.go:130] > {
	I0213 22:30:27.914121   32908 command_runner.go:130] >   "images": [
	I0213 22:30:27.914125   32908 command_runner.go:130] >     {
	I0213 22:30:27.914143   32908 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0213 22:30:27.914149   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914155   32908 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0213 22:30:27.914158   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914162   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914170   32908 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0213 22:30:27.914177   32908 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0213 22:30:27.914181   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914186   32908 command_runner.go:130] >       "size": "65258016",
	I0213 22:30:27.914193   32908 command_runner.go:130] >       "uid": null,
	I0213 22:30:27.914197   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914211   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914218   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914222   32908 command_runner.go:130] >     },
	I0213 22:30:27.914225   32908 command_runner.go:130] >     {
	I0213 22:30:27.914231   32908 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0213 22:30:27.914237   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914243   32908 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0213 22:30:27.914252   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914256   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914266   32908 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0213 22:30:27.914274   32908 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0213 22:30:27.914280   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914292   32908 command_runner.go:130] >       "size": "31470524",
	I0213 22:30:27.914298   32908 command_runner.go:130] >       "uid": null,
	I0213 22:30:27.914303   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914309   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914313   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914319   32908 command_runner.go:130] >     },
	I0213 22:30:27.914322   32908 command_runner.go:130] >     {
	I0213 22:30:27.914329   32908 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0213 22:30:27.914334   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914339   32908 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0213 22:30:27.914345   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914349   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914358   32908 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0213 22:30:27.914368   32908 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0213 22:30:27.914374   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914378   32908 command_runner.go:130] >       "size": "53621675",
	I0213 22:30:27.914385   32908 command_runner.go:130] >       "uid": null,
	I0213 22:30:27.914389   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914396   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914400   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914406   32908 command_runner.go:130] >     },
	I0213 22:30:27.914410   32908 command_runner.go:130] >     {
	I0213 22:30:27.914416   32908 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0213 22:30:27.914420   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914426   32908 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0213 22:30:27.914432   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914436   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914445   32908 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0213 22:30:27.914452   32908 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0213 22:30:27.914464   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914471   32908 command_runner.go:130] >       "size": "295456551",
	I0213 22:30:27.914477   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:27.914484   32908 command_runner.go:130] >         "value": "0"
	I0213 22:30:27.914488   32908 command_runner.go:130] >       },
	I0213 22:30:27.914494   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914498   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914502   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914506   32908 command_runner.go:130] >     },
	I0213 22:30:27.914510   32908 command_runner.go:130] >     {
	I0213 22:30:27.914516   32908 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0213 22:30:27.914521   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914526   32908 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0213 22:30:27.914532   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914536   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914546   32908 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0213 22:30:27.914553   32908 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0213 22:30:27.914559   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914563   32908 command_runner.go:130] >       "size": "127226832",
	I0213 22:30:27.914572   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:27.914581   32908 command_runner.go:130] >         "value": "0"
	I0213 22:30:27.914584   32908 command_runner.go:130] >       },
	I0213 22:30:27.914588   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914595   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914599   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914603   32908 command_runner.go:130] >     },
	I0213 22:30:27.914609   32908 command_runner.go:130] >     {
	I0213 22:30:27.914615   32908 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0213 22:30:27.914621   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914627   32908 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0213 22:30:27.914633   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914637   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914647   32908 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0213 22:30:27.914657   32908 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0213 22:30:27.914663   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914667   32908 command_runner.go:130] >       "size": "123261750",
	I0213 22:30:27.914671   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:27.914675   32908 command_runner.go:130] >         "value": "0"
	I0213 22:30:27.914682   32908 command_runner.go:130] >       },
	I0213 22:30:27.914688   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914693   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914700   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914704   32908 command_runner.go:130] >     },
	I0213 22:30:27.914707   32908 command_runner.go:130] >     {
	I0213 22:30:27.914713   32908 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0213 22:30:27.914720   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914725   32908 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0213 22:30:27.914731   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914735   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914742   32908 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0213 22:30:27.914751   32908 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0213 22:30:27.914757   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914761   32908 command_runner.go:130] >       "size": "74749335",
	I0213 22:30:27.914768   32908 command_runner.go:130] >       "uid": null,
	I0213 22:30:27.914775   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914779   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914785   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914796   32908 command_runner.go:130] >     },
	I0213 22:30:27.914802   32908 command_runner.go:130] >     {
	I0213 22:30:27.914808   32908 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0213 22:30:27.914814   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914819   32908 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0213 22:30:27.914825   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914829   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914852   32908 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0213 22:30:27.914862   32908 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0213 22:30:27.914866   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914871   32908 command_runner.go:130] >       "size": "61551410",
	I0213 22:30:27.914877   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:27.914881   32908 command_runner.go:130] >         "value": "0"
	I0213 22:30:27.914884   32908 command_runner.go:130] >       },
	I0213 22:30:27.914891   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914895   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914901   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.914907   32908 command_runner.go:130] >     },
	I0213 22:30:27.914913   32908 command_runner.go:130] >     {
	I0213 22:30:27.914919   32908 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0213 22:30:27.914925   32908 command_runner.go:130] >       "repoTags": [
	I0213 22:30:27.914930   32908 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0213 22:30:27.914935   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914939   32908 command_runner.go:130] >       "repoDigests": [
	I0213 22:30:27.914948   32908 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0213 22:30:27.914955   32908 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0213 22:30:27.914961   32908 command_runner.go:130] >       ],
	I0213 22:30:27.914965   32908 command_runner.go:130] >       "size": "750414",
	I0213 22:30:27.914972   32908 command_runner.go:130] >       "uid": {
	I0213 22:30:27.914976   32908 command_runner.go:130] >         "value": "65535"
	I0213 22:30:27.914983   32908 command_runner.go:130] >       },
	I0213 22:30:27.914990   32908 command_runner.go:130] >       "username": "",
	I0213 22:30:27.914995   32908 command_runner.go:130] >       "spec": null,
	I0213 22:30:27.914999   32908 command_runner.go:130] >       "pinned": false
	I0213 22:30:27.915003   32908 command_runner.go:130] >     }
	I0213 22:30:27.915009   32908 command_runner.go:130] >   ]
	I0213 22:30:27.915014   32908 command_runner.go:130] > }
	I0213 22:30:27.915442   32908 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 22:30:27.915462   32908 cache_images.go:84] Images are preloaded, skipping loading
	I0213 22:30:27.915530   32908 ssh_runner.go:195] Run: crio config
	I0213 22:30:27.963223   32908 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0213 22:30:27.963267   32908 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0213 22:30:27.963277   32908 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0213 22:30:27.963283   32908 command_runner.go:130] > #
	I0213 22:30:27.963298   32908 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0213 22:30:27.963309   32908 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0213 22:30:27.963319   32908 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0213 22:30:27.963329   32908 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0213 22:30:27.963338   32908 command_runner.go:130] > # reload'.
	I0213 22:30:27.963348   32908 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0213 22:30:27.963360   32908 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0213 22:30:27.963370   32908 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0213 22:30:27.963385   32908 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0213 22:30:27.963392   32908 command_runner.go:130] > [crio]
	I0213 22:30:27.963402   32908 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0213 22:30:27.963420   32908 command_runner.go:130] > # containers images, in this directory.
	I0213 22:30:27.963428   32908 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0213 22:30:27.963442   32908 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0213 22:30:27.963454   32908 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0213 22:30:27.963464   32908 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0213 22:30:27.963476   32908 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0213 22:30:27.963556   32908 command_runner.go:130] > storage_driver = "overlay"
	I0213 22:30:27.963573   32908 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0213 22:30:27.963592   32908 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0213 22:30:27.963600   32908 command_runner.go:130] > storage_option = [
	I0213 22:30:27.963744   32908 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0213 22:30:27.963895   32908 command_runner.go:130] > ]
	I0213 22:30:27.963908   32908 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0213 22:30:27.963914   32908 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0213 22:30:27.964200   32908 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0213 22:30:27.964211   32908 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0213 22:30:27.964217   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0213 22:30:27.964221   32908 command_runner.go:130] > # always happen on a node reboot
	I0213 22:30:27.964530   32908 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0213 22:30:27.964548   32908 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0213 22:30:27.964558   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0213 22:30:27.964582   32908 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0213 22:30:27.964959   32908 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0213 22:30:27.964978   32908 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0213 22:30:27.964989   32908 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0213 22:30:27.965262   32908 command_runner.go:130] > # internal_wipe = true
	I0213 22:30:27.965279   32908 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0213 22:30:27.965290   32908 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0213 22:30:27.965299   32908 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0213 22:30:27.965568   32908 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0213 22:30:27.965579   32908 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0213 22:30:27.965583   32908 command_runner.go:130] > [crio.api]
	I0213 22:30:27.965591   32908 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0213 22:30:27.965979   32908 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0213 22:30:27.965993   32908 command_runner.go:130] > # IP address on which the stream server will listen.
	I0213 22:30:27.966320   32908 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0213 22:30:27.966331   32908 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0213 22:30:27.966338   32908 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0213 22:30:27.966635   32908 command_runner.go:130] > # stream_port = "0"
	I0213 22:30:27.966644   32908 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0213 22:30:27.967037   32908 command_runner.go:130] > # stream_enable_tls = false
	I0213 22:30:27.967055   32908 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0213 22:30:27.967278   32908 command_runner.go:130] > # stream_idle_timeout = ""
	I0213 22:30:27.967292   32908 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0213 22:30:27.967302   32908 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0213 22:30:27.967312   32908 command_runner.go:130] > # minutes.
	I0213 22:30:27.967493   32908 command_runner.go:130] > # stream_tls_cert = ""
	I0213 22:30:27.967512   32908 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0213 22:30:27.967523   32908 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0213 22:30:27.967716   32908 command_runner.go:130] > # stream_tls_key = ""
	I0213 22:30:27.967730   32908 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0213 22:30:27.967741   32908 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0213 22:30:27.967751   32908 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0213 22:30:27.967997   32908 command_runner.go:130] > # stream_tls_ca = ""
	I0213 22:30:27.968013   32908 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:30:27.968168   32908 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0213 22:30:27.968192   32908 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:30:27.968330   32908 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0213 22:30:27.968365   32908 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0213 22:30:27.968379   32908 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0213 22:30:27.968387   32908 command_runner.go:130] > [crio.runtime]
	I0213 22:30:27.968401   32908 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0213 22:30:27.968414   32908 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0213 22:30:27.968423   32908 command_runner.go:130] > # "nofile=1024:2048"
	I0213 22:30:27.968437   32908 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0213 22:30:27.968501   32908 command_runner.go:130] > # default_ulimits = [
	I0213 22:30:27.968667   32908 command_runner.go:130] > # ]
	I0213 22:30:27.968685   32908 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0213 22:30:27.969037   32908 command_runner.go:130] > # no_pivot = false
	I0213 22:30:27.969055   32908 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0213 22:30:27.969066   32908 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0213 22:30:27.969490   32908 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0213 22:30:27.969506   32908 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0213 22:30:27.969515   32908 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0213 22:30:27.969530   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:30:27.969650   32908 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0213 22:30:27.969664   32908 command_runner.go:130] > # Cgroup setting for conmon
	I0213 22:30:27.969676   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0213 22:30:27.969890   32908 command_runner.go:130] > conmon_cgroup = "pod"
	I0213 22:30:27.969905   32908 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0213 22:30:27.969914   32908 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0213 22:30:27.969928   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:30:27.969938   32908 command_runner.go:130] > conmon_env = [
	I0213 22:30:27.970038   32908 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0213 22:30:27.970081   32908 command_runner.go:130] > ]
	I0213 22:30:27.970095   32908 command_runner.go:130] > # Additional environment variables to set for all the
	I0213 22:30:27.970106   32908 command_runner.go:130] > # containers. These are overridden if set in the
	I0213 22:30:27.970119   32908 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0213 22:30:27.970256   32908 command_runner.go:130] > # default_env = [
	I0213 22:30:27.970403   32908 command_runner.go:130] > # ]
	I0213 22:30:27.970417   32908 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0213 22:30:27.970709   32908 command_runner.go:130] > # selinux = false
	I0213 22:30:27.970720   32908 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0213 22:30:27.970726   32908 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0213 22:30:27.970736   32908 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0213 22:30:27.972577   32908 command_runner.go:130] > # seccomp_profile = ""
	I0213 22:30:27.972601   32908 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0213 22:30:27.972611   32908 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0213 22:30:27.972625   32908 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0213 22:30:27.972635   32908 command_runner.go:130] > # which might increase security.
	I0213 22:30:27.972645   32908 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0213 22:30:27.972658   32908 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0213 22:30:27.972673   32908 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0213 22:30:27.972686   32908 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0213 22:30:27.972700   32908 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0213 22:30:27.972711   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:30:27.972721   32908 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0213 22:30:27.972730   32908 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0213 22:30:27.972741   32908 command_runner.go:130] > # the cgroup blockio controller.
	I0213 22:30:27.972750   32908 command_runner.go:130] > # blockio_config_file = ""
	I0213 22:30:27.972763   32908 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0213 22:30:27.972772   32908 command_runner.go:130] > # irqbalance daemon.
	I0213 22:30:27.972783   32908 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0213 22:30:27.972795   32908 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0213 22:30:27.972808   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:30:27.972817   32908 command_runner.go:130] > # rdt_config_file = ""
	I0213 22:30:27.972825   32908 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0213 22:30:27.972835   32908 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0213 22:30:27.972846   32908 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0213 22:30:27.972856   32908 command_runner.go:130] > # separate_pull_cgroup = ""
	I0213 22:30:27.972872   32908 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0213 22:30:27.972885   32908 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0213 22:30:27.972893   32908 command_runner.go:130] > # will be added.
	I0213 22:30:27.972903   32908 command_runner.go:130] > # default_capabilities = [
	I0213 22:30:27.972910   32908 command_runner.go:130] > # 	"CHOWN",
	I0213 22:30:27.972920   32908 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0213 22:30:27.972930   32908 command_runner.go:130] > # 	"FSETID",
	I0213 22:30:27.972939   32908 command_runner.go:130] > # 	"FOWNER",
	I0213 22:30:27.972949   32908 command_runner.go:130] > # 	"SETGID",
	I0213 22:30:27.972958   32908 command_runner.go:130] > # 	"SETUID",
	I0213 22:30:27.972968   32908 command_runner.go:130] > # 	"SETPCAP",
	I0213 22:30:27.972978   32908 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0213 22:30:27.972988   32908 command_runner.go:130] > # 	"KILL",
	I0213 22:30:27.972994   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973007   32908 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0213 22:30:27.973020   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:30:27.973030   32908 command_runner.go:130] > # default_sysctls = [
	I0213 22:30:27.973039   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973055   32908 command_runner.go:130] > # List of devices on the host that a
	I0213 22:30:27.973068   32908 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0213 22:30:27.973082   32908 command_runner.go:130] > # allowed_devices = [
	I0213 22:30:27.973091   32908 command_runner.go:130] > # 	"/dev/fuse",
	I0213 22:30:27.973100   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973112   32908 command_runner.go:130] > # List of additional devices, specified as
	I0213 22:30:27.973128   32908 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0213 22:30:27.973140   32908 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0213 22:30:27.973206   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:30:27.973220   32908 command_runner.go:130] > # additional_devices = [
	I0213 22:30:27.973226   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973234   32908 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0213 22:30:27.973244   32908 command_runner.go:130] > # cdi_spec_dirs = [
	I0213 22:30:27.973253   32908 command_runner.go:130] > # 	"/etc/cdi",
	I0213 22:30:27.973263   32908 command_runner.go:130] > # 	"/var/run/cdi",
	I0213 22:30:27.973271   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973284   32908 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0213 22:30:27.973296   32908 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0213 22:30:27.973310   32908 command_runner.go:130] > # Defaults to false.
	I0213 22:30:27.973323   32908 command_runner.go:130] > # device_ownership_from_security_context = false
	I0213 22:30:27.973336   32908 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0213 22:30:27.973349   32908 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0213 22:30:27.973359   32908 command_runner.go:130] > # hooks_dir = [
	I0213 22:30:27.973370   32908 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0213 22:30:27.973379   32908 command_runner.go:130] > # ]
	I0213 22:30:27.973391   32908 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0213 22:30:27.973402   32908 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0213 22:30:27.973412   32908 command_runner.go:130] > # its default mounts from the following two files:
	I0213 22:30:27.973420   32908 command_runner.go:130] > #
	I0213 22:30:27.973432   32908 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0213 22:30:27.973446   32908 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0213 22:30:27.973459   32908 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0213 22:30:27.973467   32908 command_runner.go:130] > #
	I0213 22:30:27.973481   32908 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0213 22:30:27.973494   32908 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0213 22:30:27.973508   32908 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0213 22:30:27.973522   32908 command_runner.go:130] > #      only add mounts it finds in this file.
	I0213 22:30:27.973531   32908 command_runner.go:130] > #
	I0213 22:30:27.973542   32908 command_runner.go:130] > # default_mounts_file = ""
	I0213 22:30:27.973555   32908 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0213 22:30:27.973573   32908 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0213 22:30:27.973588   32908 command_runner.go:130] > pids_limit = 1024
	I0213 22:30:27.973600   32908 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0213 22:30:27.973612   32908 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0213 22:30:27.973623   32908 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0213 22:30:27.973638   32908 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0213 22:30:27.973647   32908 command_runner.go:130] > # log_size_max = -1
	I0213 22:30:27.973660   32908 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0213 22:30:27.973670   32908 command_runner.go:130] > # log_to_journald = false
	I0213 22:30:27.973682   32908 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0213 22:30:27.973693   32908 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0213 22:30:27.973704   32908 command_runner.go:130] > # Path to directory for container attach sockets.
	I0213 22:30:27.973715   32908 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0213 22:30:27.973727   32908 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0213 22:30:27.973740   32908 command_runner.go:130] > # bind_mount_prefix = ""
	I0213 22:30:27.973751   32908 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0213 22:30:27.973760   32908 command_runner.go:130] > # read_only = false
	I0213 22:30:27.973773   32908 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0213 22:30:27.973786   32908 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0213 22:30:27.973795   32908 command_runner.go:130] > # live configuration reload.
	I0213 22:30:27.973801   32908 command_runner.go:130] > # log_level = "info"
	I0213 22:30:27.973810   32908 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0213 22:30:27.973817   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:30:27.973821   32908 command_runner.go:130] > # log_filter = ""
	I0213 22:30:27.973829   32908 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0213 22:30:27.973838   32908 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0213 22:30:27.973844   32908 command_runner.go:130] > # separated by comma.
	I0213 22:30:27.973848   32908 command_runner.go:130] > # uid_mappings = ""
	I0213 22:30:27.973856   32908 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0213 22:30:27.973863   32908 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0213 22:30:27.973884   32908 command_runner.go:130] > # separated by comma.
	I0213 22:30:27.973891   32908 command_runner.go:130] > # gid_mappings = ""
	I0213 22:30:27.973908   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0213 22:30:27.973918   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:30:27.973927   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:30:27.973934   32908 command_runner.go:130] > # minimum_mappable_uid = -1
	I0213 22:30:27.973940   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0213 22:30:27.973948   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:30:27.973956   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:30:27.973962   32908 command_runner.go:130] > # minimum_mappable_gid = -1
	I0213 22:30:27.973974   32908 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0213 22:30:27.973986   32908 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0213 22:30:27.973998   32908 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0213 22:30:27.974006   32908 command_runner.go:130] > # ctr_stop_timeout = 30
	I0213 22:30:27.974018   32908 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0213 22:30:27.974031   32908 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0213 22:30:27.974043   32908 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0213 22:30:27.974053   32908 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0213 22:30:27.974060   32908 command_runner.go:130] > drop_infra_ctr = false
	I0213 22:30:27.974066   32908 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0213 22:30:27.974078   32908 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0213 22:30:27.974093   32908 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0213 22:30:27.974103   32908 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0213 22:30:27.974114   32908 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0213 22:30:27.974127   32908 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0213 22:30:27.974138   32908 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0213 22:30:27.974153   32908 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0213 22:30:27.974163   32908 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0213 22:30:27.974171   32908 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0213 22:30:27.974180   32908 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0213 22:30:27.974189   32908 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0213 22:30:27.974200   32908 command_runner.go:130] > # default_runtime = "runc"
	I0213 22:30:27.974209   32908 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0213 22:30:27.974225   32908 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0213 22:30:27.974242   32908 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0213 22:30:27.974253   32908 command_runner.go:130] > # creation as a file is not desired either.
	I0213 22:30:27.974265   32908 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0213 22:30:27.974274   32908 command_runner.go:130] > # the hostname is being managed dynamically.
	I0213 22:30:27.974289   32908 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0213 22:30:27.974298   32908 command_runner.go:130] > # ]
	I0213 22:30:27.974309   32908 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0213 22:30:27.974323   32908 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0213 22:30:27.974337   32908 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0213 22:30:27.974350   32908 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0213 22:30:27.974359   32908 command_runner.go:130] > #
	I0213 22:30:27.974369   32908 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0213 22:30:27.974377   32908 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0213 22:30:27.974383   32908 command_runner.go:130] > #  runtime_type = "oci"
	I0213 22:30:27.974395   32908 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0213 22:30:27.974407   32908 command_runner.go:130] > #  privileged_without_host_devices = false
	I0213 22:30:27.974418   32908 command_runner.go:130] > #  allowed_annotations = []
	I0213 22:30:27.974427   32908 command_runner.go:130] > # Where:
	I0213 22:30:27.974439   32908 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0213 22:30:27.974452   32908 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0213 22:30:27.974465   32908 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0213 22:30:27.974476   32908 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0213 22:30:27.974489   32908 command_runner.go:130] > #   in $PATH.
	I0213 22:30:27.974503   32908 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0213 22:30:27.974515   32908 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0213 22:30:27.974529   32908 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0213 22:30:27.974541   32908 command_runner.go:130] > #   state.
	I0213 22:30:27.974555   32908 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0213 22:30:27.974568   32908 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0213 22:30:27.974585   32908 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0213 22:30:27.974594   32908 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0213 22:30:27.974607   32908 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0213 22:30:27.974622   32908 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0213 22:30:27.974635   32908 command_runner.go:130] > #   The currently recognized values are:
	I0213 22:30:27.974649   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0213 22:30:27.974664   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0213 22:30:27.974676   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0213 22:30:27.974688   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0213 22:30:27.974696   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0213 22:30:27.974710   32908 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0213 22:30:27.974727   32908 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0213 22:30:27.974741   32908 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0213 22:30:27.974753   32908 command_runner.go:130] > #   should be moved to the container's cgroup
	I0213 22:30:27.974764   32908 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0213 22:30:27.974775   32908 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0213 22:30:27.974785   32908 command_runner.go:130] > runtime_type = "oci"
	I0213 22:30:27.974791   32908 command_runner.go:130] > runtime_root = "/run/runc"
	I0213 22:30:27.974795   32908 command_runner.go:130] > runtime_config_path = ""
	I0213 22:30:27.974801   32908 command_runner.go:130] > monitor_path = ""
	I0213 22:30:27.974809   32908 command_runner.go:130] > monitor_cgroup = ""
	I0213 22:30:27.974820   32908 command_runner.go:130] > monitor_exec_cgroup = ""
	I0213 22:30:27.974831   32908 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0213 22:30:27.974841   32908 command_runner.go:130] > # running containers
	I0213 22:30:27.974852   32908 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0213 22:30:27.974865   32908 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0213 22:30:27.974941   32908 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0213 22:30:27.974959   32908 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0213 22:30:27.974968   32908 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0213 22:30:27.974985   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0213 22:30:27.974996   32908 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0213 22:30:27.975007   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0213 22:30:27.975018   32908 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0213 22:30:27.975029   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0213 22:30:27.975039   32908 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0213 22:30:27.975053   32908 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0213 22:30:27.975067   32908 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0213 22:30:27.975083   32908 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0213 22:30:27.975098   32908 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0213 22:30:27.975111   32908 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0213 22:30:27.975127   32908 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0213 22:30:27.975139   32908 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0213 22:30:27.975152   32908 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0213 22:30:27.975167   32908 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0213 22:30:27.975177   32908 command_runner.go:130] > # Example:
	I0213 22:30:27.975189   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0213 22:30:27.975200   32908 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0213 22:30:27.975214   32908 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0213 22:30:27.975226   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0213 22:30:27.975233   32908 command_runner.go:130] > # cpuset = 0
	I0213 22:30:27.975238   32908 command_runner.go:130] > # cpushares = "0-1"
	I0213 22:30:27.975247   32908 command_runner.go:130] > # Where:
	I0213 22:30:27.975258   32908 command_runner.go:130] > # The workload name is workload-type.
	I0213 22:30:27.975273   32908 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0213 22:30:27.975285   32908 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0213 22:30:27.975298   32908 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0213 22:30:27.975316   32908 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0213 22:30:27.975325   32908 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0213 22:30:27.975332   32908 command_runner.go:130] > # 
	I0213 22:30:27.975347   32908 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0213 22:30:27.975356   32908 command_runner.go:130] > #
	I0213 22:30:27.975369   32908 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0213 22:30:27.975382   32908 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0213 22:30:27.975396   32908 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0213 22:30:27.975406   32908 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0213 22:30:27.975421   32908 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0213 22:30:27.975431   32908 command_runner.go:130] > [crio.image]
	I0213 22:30:27.975442   32908 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0213 22:30:27.975453   32908 command_runner.go:130] > # default_transport = "docker://"
	I0213 22:30:27.975466   32908 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0213 22:30:27.975480   32908 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:30:27.975490   32908 command_runner.go:130] > # global_auth_file = ""
	I0213 22:30:27.975499   32908 command_runner.go:130] > # The image used to instantiate infra containers.
	I0213 22:30:27.975508   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:30:27.975519   32908 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0213 22:30:27.975533   32908 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0213 22:30:27.975547   32908 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:30:27.975559   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:30:27.975569   32908 command_runner.go:130] > # pause_image_auth_file = ""
	I0213 22:30:27.975590   32908 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0213 22:30:27.975599   32908 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0213 22:30:27.975610   32908 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0213 22:30:27.975623   32908 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0213 22:30:27.975637   32908 command_runner.go:130] > # pause_command = "/pause"
	I0213 22:30:27.975651   32908 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0213 22:30:27.975665   32908 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0213 22:30:27.975675   32908 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0213 22:30:27.975683   32908 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0213 22:30:27.975688   32908 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0213 22:30:27.975693   32908 command_runner.go:130] > # signature_policy = ""
	I0213 22:30:27.975703   32908 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0213 22:30:27.975714   32908 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0213 22:30:27.975721   32908 command_runner.go:130] > # changing them here.
	I0213 22:30:27.975728   32908 command_runner.go:130] > # insecure_registries = [
	I0213 22:30:27.975733   32908 command_runner.go:130] > # ]
	I0213 22:30:27.975744   32908 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0213 22:30:27.975752   32908 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0213 22:30:27.975763   32908 command_runner.go:130] > # image_volumes = "mkdir"
	I0213 22:30:27.975769   32908 command_runner.go:130] > # Temporary directory to use for storing big files
	I0213 22:30:27.975777   32908 command_runner.go:130] > # big_files_temporary_dir = ""
	I0213 22:30:27.975787   32908 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0213 22:30:27.975802   32908 command_runner.go:130] > # CNI plugins.
	I0213 22:30:27.975809   32908 command_runner.go:130] > [crio.network]
	I0213 22:30:27.975823   32908 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0213 22:30:27.975834   32908 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0213 22:30:27.975844   32908 command_runner.go:130] > # cni_default_network = ""
	I0213 22:30:27.975856   32908 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0213 22:30:27.975867   32908 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0213 22:30:27.975874   32908 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0213 22:30:27.975884   32908 command_runner.go:130] > # plugin_dirs = [
	I0213 22:30:27.975894   32908 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0213 22:30:27.975903   32908 command_runner.go:130] > # ]
	I0213 22:30:27.975915   32908 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0213 22:30:27.975924   32908 command_runner.go:130] > [crio.metrics]
	I0213 22:30:27.975936   32908 command_runner.go:130] > # Globally enable or disable metrics support.
	I0213 22:30:27.975946   32908 command_runner.go:130] > enable_metrics = true
	I0213 22:30:27.975956   32908 command_runner.go:130] > # Specify enabled metrics collectors.
	I0213 22:30:27.975964   32908 command_runner.go:130] > # Per default all metrics are enabled.
	I0213 22:30:27.975978   32908 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0213 22:30:27.975995   32908 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0213 22:30:27.976008   32908 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0213 22:30:27.976018   32908 command_runner.go:130] > # metrics_collectors = [
	I0213 22:30:27.976028   32908 command_runner.go:130] > # 	"operations",
	I0213 22:30:27.976039   32908 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0213 22:30:27.976050   32908 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0213 22:30:27.976062   32908 command_runner.go:130] > # 	"operations_errors",
	I0213 22:30:27.976072   32908 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0213 22:30:27.976082   32908 command_runner.go:130] > # 	"image_pulls_by_name",
	I0213 22:30:27.976093   32908 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0213 22:30:27.976104   32908 command_runner.go:130] > # 	"image_pulls_failures",
	I0213 22:30:27.976114   32908 command_runner.go:130] > # 	"image_pulls_successes",
	I0213 22:30:27.976125   32908 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0213 22:30:27.976135   32908 command_runner.go:130] > # 	"image_layer_reuse",
	I0213 22:30:27.976145   32908 command_runner.go:130] > # 	"containers_oom_total",
	I0213 22:30:27.976152   32908 command_runner.go:130] > # 	"containers_oom",
	I0213 22:30:27.976158   32908 command_runner.go:130] > # 	"processes_defunct",
	I0213 22:30:27.976168   32908 command_runner.go:130] > # 	"operations_total",
	I0213 22:30:27.976183   32908 command_runner.go:130] > # 	"operations_latency_seconds",
	I0213 22:30:27.976195   32908 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0213 22:30:27.976205   32908 command_runner.go:130] > # 	"operations_errors_total",
	I0213 22:30:27.976215   32908 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0213 22:30:27.976226   32908 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0213 22:30:27.976237   32908 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0213 22:30:27.976245   32908 command_runner.go:130] > # 	"image_pulls_success_total",
	I0213 22:30:27.976252   32908 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0213 22:30:27.976259   32908 command_runner.go:130] > # 	"containers_oom_count_total",
	I0213 22:30:27.976268   32908 command_runner.go:130] > # ]
	I0213 22:30:27.976280   32908 command_runner.go:130] > # The port on which the metrics server will listen.
	I0213 22:30:27.976291   32908 command_runner.go:130] > # metrics_port = 9090
	I0213 22:30:27.976303   32908 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0213 22:30:27.976313   32908 command_runner.go:130] > # metrics_socket = ""
	I0213 22:30:27.976325   32908 command_runner.go:130] > # The certificate for the secure metrics server.
	I0213 22:30:27.976338   32908 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0213 22:30:27.976349   32908 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0213 22:30:27.976358   32908 command_runner.go:130] > # certificate on any modification event.
	I0213 22:30:27.976372   32908 command_runner.go:130] > # metrics_cert = ""
	I0213 22:30:27.976385   32908 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0213 22:30:27.976397   32908 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0213 22:30:27.976407   32908 command_runner.go:130] > # metrics_key = ""
	I0213 22:30:27.976419   32908 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0213 22:30:27.976429   32908 command_runner.go:130] > [crio.tracing]
	I0213 22:30:27.976441   32908 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0213 22:30:27.976450   32908 command_runner.go:130] > # enable_tracing = false
	I0213 22:30:27.976462   32908 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0213 22:30:27.976473   32908 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0213 22:30:27.976485   32908 command_runner.go:130] > # Number of samples to collect per million spans.
	I0213 22:30:27.976497   32908 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0213 22:30:27.976511   32908 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0213 22:30:27.976520   32908 command_runner.go:130] > [crio.stats]
	I0213 22:30:27.976532   32908 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0213 22:30:27.976544   32908 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0213 22:30:27.976552   32908 command_runner.go:130] > # stats_collection_period = 0
	I0213 22:30:27.976590   32908 command_runner.go:130] ! time="2024-02-13 22:30:27.910492747Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0213 22:30:27.976614   32908 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
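	The dump above is the effective crio.conf that minikube lays down, with the values it overrides from the CRI-O defaults left uncommented (cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9", the conmon settings). A minimal sketch of re-checking those values on the running node, assuming the profile name used in this run (multinode-413653) and that crio and crictl are on the node's PATH:
	
	    # Print the effective CRI-O configuration and filter for the minikube-managed keys
	    minikube ssh -p multinode-413653 -- sudo crio config | grep -E 'cgroup_manager|pids_limit|pause_image'
	    # List the preloaded images that let cache_images skip loading above
	    minikube ssh -p multinode-413653 -- sudo crictl images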
	I0213 22:30:27.976705   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:30:27.976718   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:30:27.976740   32908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 22:30:27.976767   32908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-413653 NodeName:multinode-413653 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 22:30:27.976916   32908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-413653"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.81
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
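	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at 22:30:27.976767 and are written to /var/tmp/minikube/kubeadm.yaml.new below. As a sketch only, assuming the kubeadm binary already staged under /var/lib/minikube/binaries/v1.28.4 and that its config validate subcommand is available (added upstream around v1.26), the file can be checked for schema errors on the node before init runs:
	
	    # Validate the generated kubeadm configuration on the node
	    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new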
	I0213 22:30:27.977018   32908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-413653 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
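	The kubelet drop-in above overrides ExecStart to point at the minikube-staged binary and the CRI-O socket; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged sketch of confirming on the node that systemd sees the override (standard systemd tooling assumed):
	
	    # Show the kubelet unit together with all drop-ins, then reload units if the file was changed by hand
	    sudo systemctl cat kubelet
	    sudo systemctl daemon-reload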
	I0213 22:30:27.977078   32908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 22:30:27.986614   32908 command_runner.go:130] > kubeadm
	I0213 22:30:27.986634   32908 command_runner.go:130] > kubectl
	I0213 22:30:27.986641   32908 command_runner.go:130] > kubelet
	I0213 22:30:27.986733   32908 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 22:30:27.986806   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 22:30:27.995979   32908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0213 22:30:28.011380   32908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 22:30:28.027208   32908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0213 22:30:28.043498   32908 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0213 22:30:28.047312   32908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 22:30:28.058755   32908 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653 for IP: 192.168.39.81
	I0213 22:30:28.058797   32908 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:30:28.058930   32908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 22:30:28.058971   32908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 22:30:28.059041   32908 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key
	I0213 22:30:28.059102   32908 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/apiserver.key.42d444b6
	I0213 22:30:28.059138   32908 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/proxy-client.key
	I0213 22:30:28.059148   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0213 22:30:28.059159   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0213 22:30:28.059171   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0213 22:30:28.059183   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0213 22:30:28.059199   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 22:30:28.059211   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 22:30:28.059223   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 22:30:28.059234   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 22:30:28.059285   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 22:30:28.059310   32908 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 22:30:28.059320   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 22:30:28.059348   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 22:30:28.059408   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 22:30:28.059438   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 22:30:28.059480   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:30:28.059505   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:30:28.059518   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem -> /usr/share/ca-certificates/16200.pem
	I0213 22:30:28.059529   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /usr/share/ca-certificates/162002.pem
	I0213 22:30:28.060143   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 22:30:28.083504   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 22:30:28.112163   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 22:30:28.134289   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 22:30:28.157112   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 22:30:28.180568   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 22:30:28.204495   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 22:30:28.228139   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 22:30:28.251792   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 22:30:28.278301   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 22:30:28.303634   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 22:30:28.325950   32908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 22:30:28.342453   32908 ssh_runner.go:195] Run: openssl version
	I0213 22:30:28.347565   32908 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0213 22:30:28.347820   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 22:30:28.358751   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:30:28.363400   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:30:28.363469   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:30:28.363519   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:30:28.368806   32908 command_runner.go:130] > b5213941
	I0213 22:30:28.368884   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 22:30:28.381044   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 22:30:28.394526   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 22:30:28.399503   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:30:28.399694   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:30:28.399768   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 22:30:28.405683   32908 command_runner.go:130] > 51391683
	I0213 22:30:28.405777   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 22:30:28.419541   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 22:30:28.432014   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 22:30:28.436670   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:30:28.436796   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:30:28.436854   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 22:30:28.442502   32908 command_runner.go:130] > 3ec20f2e
	I0213 22:30:28.442553   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 22:30:28.454361   32908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 22:30:28.459163   32908 command_runner.go:130] > ca.crt
	I0213 22:30:28.459184   32908 command_runner.go:130] > ca.key
	I0213 22:30:28.459191   32908 command_runner.go:130] > healthcheck-client.crt
	I0213 22:30:28.459198   32908 command_runner.go:130] > healthcheck-client.key
	I0213 22:30:28.459208   32908 command_runner.go:130] > peer.crt
	I0213 22:30:28.459213   32908 command_runner.go:130] > peer.key
	I0213 22:30:28.459219   32908 command_runner.go:130] > server.crt
	I0213 22:30:28.459225   32908 command_runner.go:130] > server.key
	I0213 22:30:28.459287   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 22:30:28.465383   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.465683   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 22:30:28.472023   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.472365   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 22:30:28.478722   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.478780   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 22:30:28.484373   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.484529   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 22:30:28.490382   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.490448   32908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 22:30:28.497417   32908 command_runner.go:130] > Certificate will not expire
	I0213 22:30:28.497697   32908 kubeadm.go:404] StartCluster: {Name:multinode-413653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:30:28.497851   32908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 22:30:28.497947   32908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 22:30:28.538040   32908 cri.go:89] found id: ""
	I0213 22:30:28.538115   32908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 22:30:28.549847   32908 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0213 22:30:28.549878   32908 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0213 22:30:28.549887   32908 command_runner.go:130] > /var/lib/minikube/etcd:
	I0213 22:30:28.549893   32908 command_runner.go:130] > member
	I0213 22:30:28.550155   32908 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 22:30:28.550174   32908 kubeadm.go:636] restartCluster start
	I0213 22:30:28.550220   32908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 22:30:28.562161   32908 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:28.562897   32908 kubeconfig.go:92] found "multinode-413653" server: "https://192.168.39.81:8443"
	I0213 22:30:28.563502   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:30:28.563864   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:30:28.564712   32908 cert_rotation.go:137] Starting client certificate rotation controller
	I0213 22:30:28.564843   32908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 22:30:28.574625   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:28.574680   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:28.586805   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:29.075054   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:29.075171   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:29.088856   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:29.575018   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:29.575102   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:29.587490   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:30.075623   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:30.075722   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:30.089084   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:30.575623   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:30.575732   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:30.588161   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:31.074697   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:31.074807   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:31.089309   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:31.575148   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:31.575225   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:31.587821   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:32.075546   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:32.075647   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:32.089107   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:32.575723   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:32.575837   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:32.590033   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:33.075659   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:33.075755   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:33.088486   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:33.575056   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:33.575170   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:33.588120   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:34.074682   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:34.074783   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:34.087264   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:34.574782   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:34.574857   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:34.588083   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:35.075063   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:35.075142   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:35.087606   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:35.575158   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:35.575254   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:35.588290   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:36.074864   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:36.074976   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:36.087769   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:36.575413   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:36.575520   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:36.588559   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:37.075386   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:37.075478   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:37.089429   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:37.575018   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:37.575117   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:37.587505   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:38.075045   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:38.075118   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:38.088002   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:38.574822   32908 api_server.go:166] Checking apiserver status ...
	I0213 22:30:38.574910   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 22:30:38.587768   32908 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 22:30:38.587810   32908 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 22:30:38.587844   32908 kubeadm.go:1135] stopping kube-system containers ...
	I0213 22:30:38.587856   32908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 22:30:38.587914   32908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 22:30:38.626538   32908 cri.go:89] found id: ""
	I0213 22:30:38.626601   32908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 22:30:38.643446   32908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 22:30:38.652670   32908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0213 22:30:38.653151   32908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0213 22:30:38.653509   32908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0213 22:30:38.654106   32908 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 22:30:38.654528   32908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 22:30:38.654582   32908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 22:30:38.664420   32908 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 22:30:38.664451   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:38.786736   32908 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 22:30:38.787136   32908 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0213 22:30:38.787572   32908 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0213 22:30:38.788085   32908 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 22:30:38.788738   32908 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0213 22:30:38.789223   32908 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0213 22:30:38.790370   32908 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0213 22:30:38.790804   32908 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0213 22:30:38.791309   32908 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0213 22:30:38.791717   32908 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 22:30:38.792260   32908 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 22:30:38.792969   32908 command_runner.go:130] > [certs] Using the existing "sa" key
	I0213 22:30:38.794458   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:38.846100   32908 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 22:30:39.063454   32908 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 22:30:39.564452   32908 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 22:30:39.901809   32908 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 22:30:39.949458   32908 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 22:30:39.952309   32908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.157827189s)
	I0213 22:30:39.952335   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:40.136962   32908 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 22:30:40.136993   32908 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 22:30:40.136999   32908 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0213 22:30:40.137299   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:40.210109   32908 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 22:30:40.210131   32908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 22:30:40.210142   32908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 22:30:40.210150   32908 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 22:30:40.210261   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:40.291983   32908 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 22:30:40.292263   32908 api_server.go:52] waiting for apiserver process to appear ...
	I0213 22:30:40.292352   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:40.793050   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:41.292459   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:41.793306   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:42.292629   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:42.793366   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:30:42.818977   32908 command_runner.go:130] > 1066
	I0213 22:30:42.819021   32908 api_server.go:72] duration metric: took 2.526759908s to wait for apiserver process to appear ...
	I0213 22:30:42.819041   32908 api_server.go:88] waiting for apiserver healthz status ...
	I0213 22:30:42.819061   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:30:46.356408   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 22:30:46.356444   32908 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 22:30:46.356459   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:30:46.456211   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 22:30:46.456252   32908 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 22:30:46.819866   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:30:46.826902   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 22:30:46.826930   32908 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 22:30:47.319468   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:30:47.330721   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 22:30:47.330750   32908 api_server.go:103] status: https://192.168.39.81:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 22:30:47.819307   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:30:47.824836   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0213 22:30:47.824919   32908 round_trippers.go:463] GET https://192.168.39.81:8443/version
	I0213 22:30:47.824937   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:47.824951   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:47.824965   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:47.832649   32908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0213 22:30:47.832686   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:47.832694   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:47.832700   32908 round_trippers.go:580]     Content-Length: 264
	I0213 22:30:47.832705   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:47 GMT
	I0213 22:30:47.832710   32908 round_trippers.go:580]     Audit-Id: 40c5e511-de62-4823-a7ef-de3cb4aec254
	I0213 22:30:47.832715   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:47.832720   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:47.832726   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:47.832754   32908 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0213 22:30:47.832832   32908 api_server.go:141] control plane version: v1.28.4
	I0213 22:30:47.832857   32908 api_server.go:131] duration metric: took 5.013809774s to wait for apiserver health ...
	I0213 22:30:47.832864   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:30:47.832869   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:30:47.834563   32908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0213 22:30:47.835808   32908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0213 22:30:47.859005   32908 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0213 22:30:47.859044   32908 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0213 22:30:47.859054   32908 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0213 22:30:47.859065   32908 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:30:47.859073   32908 command_runner.go:130] > Access: 2024-02-13 22:30:12.772425470 +0000
	I0213 22:30:47.859081   32908 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0213 22:30:47.859090   32908 command_runner.go:130] > Change: 2024-02-13 22:30:10.912425470 +0000
	I0213 22:30:47.859097   32908 command_runner.go:130] >  Birth: -
	I0213 22:30:47.859441   32908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0213 22:30:47.859458   32908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0213 22:30:47.902942   32908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0213 22:30:49.260294   32908 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:30:49.260323   32908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:30:49.260335   32908 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0213 22:30:49.260342   32908 command_runner.go:130] > daemonset.apps/kindnet configured
	I0213 22:30:49.260430   32908 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.35744202s)
	I0213 22:30:49.260456   32908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 22:30:49.260533   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:30:49.260542   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.260550   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.260569   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.264844   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:30:49.264872   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.264879   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.264885   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.264891   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.264904   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.264916   32908 round_trippers.go:580]     Audit-Id: 68cd9cb7-046c-4633-8b35-b99946410aeb
	I0213 22:30:49.264933   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.267322   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83099 chars]
	I0213 22:30:49.271368   32908 system_pods.go:59] 12 kube-system pods found
	I0213 22:30:49.271410   32908 system_pods.go:61] "coredns-5dd5756b68-lq7xh" [2543314d-46b0-490c-b0e1-74f4777913f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 22:30:49.271422   32908 system_pods.go:61] "etcd-multinode-413653" [6adf5771-f03b-47ca-ad97-384b664fb8ab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 22:30:49.271432   32908 system_pods.go:61] "kindnet-4m5lx" [9c27db1a-aefc-4f82-921d-3f412fbeed91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:30:49.271443   32908 system_pods.go:61] "kindnet-p2bqz" [c0ca435d-2301-48c0-a56b-2f147217fb91] Running
	I0213 22:30:49.271457   32908 system_pods.go:61] "kindnet-shxmz" [1684b3fd-4115-4ab7-88d4-dc1c95680525] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:30:49.271471   32908 system_pods.go:61] "kube-apiserver-multinode-413653" [1540a1dc-5f90-45b2-8d9e-0f0a1581328a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 22:30:49.271500   32908 system_pods.go:61] "kube-controller-manager-multinode-413653" [1d3432c0-f2cd-4371-9599-9a119dc1a8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 22:30:49.271512   32908 system_pods.go:61] "kube-proxy-26ww9" [2b00e8eb-8829-460d-a162-7fe8c783c260] Running
	I0213 22:30:49.271522   32908 system_pods.go:61] "kube-proxy-h5bvp" [d7a12109-66cd-41a9-b7e7-4e53a27a4ca7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 22:30:49.271532   32908 system_pods.go:61] "kube-proxy-k4ggx" [b9fa1c43-43a7-4737-8b10-e5327e355e9a] Running
	I0213 22:30:49.271543   32908 system_pods.go:61] "kube-scheduler-multinode-413653" [08710d51-793f-4606-9075-b5ab7331893e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 22:30:49.271555   32908 system_pods.go:61] "storage-provisioner" [aecede5e-5ae2-4239-b920-ab1af32c4d38] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 22:30:49.271565   32908 system_pods.go:74] duration metric: took 11.101636ms to wait for pod list to return data ...
	I0213 22:30:49.271578   32908 node_conditions.go:102] verifying NodePressure condition ...
	I0213 22:30:49.271640   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes
	I0213 22:30:49.271650   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.271661   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.271675   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.276484   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:30:49.276503   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.276509   32908 round_trippers.go:580]     Audit-Id: 39a3114b-c170-4a11-a241-632d498fa179
	I0213 22:30:49.276515   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.276520   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.276525   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.276530   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.276543   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.276860   32908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"795"},"items":[{"metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16354 chars]
	I0213 22:30:49.277927   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:30:49.277959   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:30:49.277978   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:30:49.277988   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:30:49.277994   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:30:49.278003   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:30:49.278010   32908 node_conditions.go:105] duration metric: took 6.426752ms to run NodePressure ...
	I0213 22:30:49.278033   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 22:30:49.567116   32908 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0213 22:30:49.567147   32908 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0213 22:30:49.567180   32908 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 22:30:49.567292   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0213 22:30:49.567305   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.567315   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.567324   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.571814   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:30:49.571841   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.571849   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.571856   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.571861   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.571871   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.571877   32908 round_trippers.go:580]     Audit-Id: c97ec236-c5d7-40e9-b6e2-a9c8ac677b23
	I0213 22:30:49.571882   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.572439   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"etcd-multinode-413653","namespace":"kube-system","uid":"6adf5771-f03b-47ca-ad97-384b664fb8ab","resourceVersion":"774","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.81:2379","kubernetes.io/config.hash":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.mirror":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.seen":"2024-02-13T22:20:28.219611587Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I0213 22:30:49.573663   32908 kubeadm.go:787] kubelet initialised
	I0213 22:30:49.573682   32908 kubeadm.go:788] duration metric: took 6.4927ms waiting for restarted kubelet to initialise ...
	I0213 22:30:49.573688   32908 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:30:49.573770   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:30:49.573780   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.573787   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.573793   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.577694   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:49.577710   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.577716   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.577721   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.577726   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.577731   32908 round_trippers.go:580]     Audit-Id: b397e37e-d11b-4503-8a0c-2062db0db5f9
	I0213 22:30:49.577736   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.577757   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.580060   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"802"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82598 chars]
	I0213 22:30:49.582616   32908 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:49.582709   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:49.582719   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.582730   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.582740   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.585052   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.585073   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.585081   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.585089   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.585096   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.585107   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.585118   32908 round_trippers.go:580]     Audit-Id: 79f654a2-4b04-4855-b5e9-19aa6b9ed0ab
	I0213 22:30:49.585128   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.585298   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:49.585730   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:49.585744   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.585753   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.585766   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.587783   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.587804   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.587814   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.587822   32908 round_trippers.go:580]     Audit-Id: 851d3b62-c050-4735-9f33-d0eb3a353c9c
	I0213 22:30:49.587829   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.587836   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.587847   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.587856   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.588178   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:49.588569   32908 pod_ready.go:97] node "multinode-413653" hosting pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.588594   32908 pod_ready.go:81] duration metric: took 5.956266ms waiting for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:49.588610   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.588616   32908 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:49.588694   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-413653
	I0213 22:30:49.588707   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.588717   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.588726   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.590706   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:49.590726   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.590735   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.590744   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.590751   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.590758   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.590766   32908 round_trippers.go:580]     Audit-Id: 10b936e6-550a-4ec4-96b7-bd545fd4aba6
	I0213 22:30:49.590772   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.590939   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-413653","namespace":"kube-system","uid":"6adf5771-f03b-47ca-ad97-384b664fb8ab","resourceVersion":"774","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.81:2379","kubernetes.io/config.hash":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.mirror":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.seen":"2024-02-13T22:20:28.219611587Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I0213 22:30:49.591376   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:49.591393   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.591404   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.591418   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.593054   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:49.593072   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.593082   32908 round_trippers.go:580]     Audit-Id: 652c3b40-afdd-4ba9-994a-812c74f64e8d
	I0213 22:30:49.593097   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.593106   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.593123   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.593134   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.593142   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.593320   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:49.593611   32908 pod_ready.go:97] node "multinode-413653" hosting pod "etcd-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.593635   32908 pod_ready.go:81] duration metric: took 5.009405ms waiting for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:49.593646   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "etcd-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.593664   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:49.593726   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:49.593736   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.593746   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.593761   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.595809   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.595828   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.595837   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.595845   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.595853   32908 round_trippers.go:580]     Audit-Id: 219b13d3-c40c-4e26-b0bd-16da7db8c2d7
	I0213 22:30:49.595862   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.595882   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.595890   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.596354   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:49.596705   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:49.596718   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.596728   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.596736   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.599461   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.599482   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.599491   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.599499   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.599508   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.599516   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.599524   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.599536   32908 round_trippers.go:580]     Audit-Id: 712e9cd6-1e1b-4b29-88a8-cf609c0ee866
	I0213 22:30:49.599669   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:49.600047   32908 pod_ready.go:97] node "multinode-413653" hosting pod "kube-apiserver-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.600072   32908 pod_ready.go:81] duration metric: took 6.397976ms waiting for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:49.600089   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "kube-apiserver-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.600098   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:49.600165   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-413653
	I0213 22:30:49.600178   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.600188   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.600201   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.602401   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.602416   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.602423   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.602429   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.602434   32908 round_trippers.go:580]     Audit-Id: 0d165726-c017-4f0e-ba59-ee052e88ddc4
	I0213 22:30:49.602439   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.602447   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.602463   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.602730   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-413653","namespace":"kube-system","uid":"1d3432c0-f2cd-4371-9599-9a119dc1a8ab","resourceVersion":"772","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.mirror":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.seen":"2024-02-13T22:20:28.219615864Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I0213 22:30:49.661463   32908 request.go:629] Waited for 58.250064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:49.661590   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:49.661606   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.661619   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.661633   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.664637   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.664658   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.664665   32908 round_trippers.go:580]     Audit-Id: 2fff260b-088e-41aa-815b-42482c09028e
	I0213 22:30:49.664671   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.664677   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.664685   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.664694   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.664703   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.664914   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:49.665333   32908 pod_ready.go:97] node "multinode-413653" hosting pod "kube-controller-manager-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.665356   32908 pod_ready.go:81] duration metric: took 65.2482ms waiting for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:49.665377   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "kube-controller-manager-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:49.665392   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:49.861588   32908 request.go:629] Waited for 196.134484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:30:49.861666   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:30:49.861673   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:49.861680   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:49.861686   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:49.864570   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:49.864591   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:49.864602   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:49 GMT
	I0213 22:30:49.864611   32908 round_trippers.go:580]     Audit-Id: 5d24939c-5408-40e3-9133-3eac46ffcaf4
	I0213 22:30:49.864629   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:49.864637   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:49.864643   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:49.864648   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:49.864820   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"480","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0213 22:30:50.060562   32908 request.go:629] Waited for 195.31804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:30:50.060625   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:30:50.060631   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:50.060638   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:50.060644   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:50.063478   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:50.063500   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:50.063510   32908 round_trippers.go:580]     Audit-Id: dfd25847-ba27-411c-aa1b-7065cb970e0d
	I0213 22:30:50.063518   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:50.063525   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:50.063532   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:50.063549   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:50.063570   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:50 GMT
	I0213 22:30:50.063717   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"e15d93ce-6cc1-4cb6-8e3a-d3d69862c7a4","resourceVersion":"708","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_22_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0213 22:30:50.064080   32908 pod_ready.go:92] pod "kube-proxy-26ww9" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:50.064099   32908 pod_ready.go:81] duration metric: took 398.697708ms waiting for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:50.064111   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:50.261160   32908 request.go:629] Waited for 196.931952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:30:50.261230   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:30:50.261242   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:50.261255   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:50.261269   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:50.264222   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:50.264249   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:50.264257   32908 round_trippers.go:580]     Audit-Id: 5ed908c4-00cc-4394-a294-699522e287a6
	I0213 22:30:50.264263   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:50.264269   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:50.264274   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:50.264280   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:50.264285   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:50 GMT
	I0213 22:30:50.264513   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h5bvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7a12109-66cd-41a9-b7e7-4e53a27a4ca7","resourceVersion":"801","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0213 22:30:50.461353   32908 request.go:629] Waited for 196.4068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:50.461443   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:50.461451   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:50.461463   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:50.461480   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:50.464480   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:50.464500   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:50.464516   32908 round_trippers.go:580]     Audit-Id: f397848d-5d0a-4dda-9aa3-762151dacce4
	I0213 22:30:50.464524   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:50.464532   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:50.464539   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:50.464547   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:50.464556   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:50 GMT
	I0213 22:30:50.464667   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:50.464989   32908 pod_ready.go:97] node "multinode-413653" hosting pod "kube-proxy-h5bvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:50.465011   32908 pod_ready.go:81] duration metric: took 400.888419ms waiting for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:50.465023   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "kube-proxy-h5bvp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:50.465037   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:50.661130   32908 request.go:629] Waited for 196.010158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:30:50.661260   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:30:50.661276   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:50.661287   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:50.661295   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:50.667078   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:30:50.667105   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:50.667125   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:50.667133   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:50 GMT
	I0213 22:30:50.667141   32908 round_trippers.go:580]     Audit-Id: 5c0ed25b-3f46-47ea-9c6e-baa6d1bd94c3
	I0213 22:30:50.667147   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:50.667154   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:50.667161   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:50.667294   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4ggx","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9fa1c43-43a7-4737-8b10-e5327e355e9a","resourceVersion":"687","creationTimestamp":"2024-02-13T22:22:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0213 22:30:50.861142   32908 request.go:629] Waited for 193.408492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:30:50.861241   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:30:50.861249   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:50.861260   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:50.861278   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:50.863946   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:50.863978   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:50.863988   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:50.863997   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:50.864005   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:50.864017   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:50.864028   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:50 GMT
	I0213 22:30:50.864044   32908 round_trippers.go:580]     Audit-Id: 80c1f1e4-2293-4fc2-8fa8-c02e7fd7272a
	I0213 22:30:50.864158   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m03","uid":"3fd11080-7896-4845-a0ac-96b51f08d0cd","resourceVersion":"707","creationTimestamp":"2024-02-13T22:22:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_22_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0213 22:30:50.864511   32908 pod_ready.go:92] pod "kube-proxy-k4ggx" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:50.864533   32908 pod_ready.go:81] duration metric: took 399.48734ms waiting for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:50.864546   32908 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:51.061601   32908 request.go:629] Waited for 196.990605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:30:51.061710   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:30:51.061722   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:51.061733   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:51.061743   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:51.064733   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:51.064758   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:51.064769   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:51.064777   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:51.064786   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:51 GMT
	I0213 22:30:51.064799   32908 round_trippers.go:580]     Audit-Id: 6e722322-d963-4e65-9a26-bacb8f435e25
	I0213 22:30:51.064813   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:51.064821   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:51.065156   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-413653","namespace":"kube-system","uid":"08710d51-793f-4606-9075-b5ab7331893e","resourceVersion":"773","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.mirror":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.seen":"2024-02-13T22:20:28.219616670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I0213 22:30:51.260991   32908 request.go:629] Waited for 195.421216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:51.261085   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:51.261093   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:51.261107   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:51.261116   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:51.264645   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:51.264675   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:51.264685   32908 round_trippers.go:580]     Audit-Id: b89a5b92-202b-43bc-8261-adbf58cdbd63
	I0213 22:30:51.264694   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:51.264702   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:51.264710   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:51.264718   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:51.264726   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:51 GMT
	I0213 22:30:51.264911   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:51.265340   32908 pod_ready.go:97] node "multinode-413653" hosting pod "kube-scheduler-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:51.265365   32908 pod_ready.go:81] duration metric: took 400.807558ms waiting for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	E0213 22:30:51.265378   32908 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-413653" hosting pod "kube-scheduler-multinode-413653" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-413653" has status "Ready":"False"
	I0213 22:30:51.265390   32908 pod_ready.go:38] duration metric: took 1.691693341s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:30:51.265414   32908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 22:30:51.277283   32908 command_runner.go:130] > -16
	I0213 22:30:51.277314   32908 ops.go:34] apiserver oom_adj: -16
	I0213 22:30:51.277322   32908 kubeadm.go:640] restartCluster took 22.727141469s
	I0213 22:30:51.277336   32908 kubeadm.go:406] StartCluster complete in 22.779638679s
	I0213 22:30:51.277351   32908 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:30:51.277416   32908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:30:51.278033   32908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:30:51.278260   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 22:30:51.278268   32908 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 22:30:51.281019   32908 out.go:177] * Enabled addons: 
	I0213 22:30:51.278535   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:30:51.278545   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:30:51.282188   32908 addons.go:505] enable addons completed in 3.878436ms: enabled=[]
	I0213 22:30:51.282410   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:30:51.282712   32908 round_trippers.go:463] GET https://192.168.39.81:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0213 22:30:51.282724   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:51.282731   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:51.282737   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:51.285536   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:51.285556   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:51.285562   32908 round_trippers.go:580]     Audit-Id: cd72a1c0-5bbe-4716-bbc0-087cc69d3e84
	I0213 22:30:51.285568   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:51.285573   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:51.285578   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:51.285584   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:51.285589   32908 round_trippers.go:580]     Content-Length: 291
	I0213 22:30:51.285594   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:51 GMT
	I0213 22:30:51.285672   32908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eccb91db-2bff-44e5-a49d-713d6c3d3d2b","resourceVersion":"799","creationTimestamp":"2024-02-13T22:20:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0213 22:30:51.285881   32908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-413653" context rescaled to 1 replicas
	I0213 22:30:51.285919   32908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 22:30:51.287386   32908 out.go:177] * Verifying Kubernetes components...
	I0213 22:30:51.288557   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:30:51.377219   32908 command_runner.go:130] > apiVersion: v1
	I0213 22:30:51.377244   32908 command_runner.go:130] > data:
	I0213 22:30:51.377249   32908 command_runner.go:130] >   Corefile: |
	I0213 22:30:51.377252   32908 command_runner.go:130] >     .:53 {
	I0213 22:30:51.377260   32908 command_runner.go:130] >         log
	I0213 22:30:51.377264   32908 command_runner.go:130] >         errors
	I0213 22:30:51.377268   32908 command_runner.go:130] >         health {
	I0213 22:30:51.377272   32908 command_runner.go:130] >            lameduck 5s
	I0213 22:30:51.377275   32908 command_runner.go:130] >         }
	I0213 22:30:51.377285   32908 command_runner.go:130] >         ready
	I0213 22:30:51.377290   32908 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0213 22:30:51.377295   32908 command_runner.go:130] >            pods insecure
	I0213 22:30:51.377302   32908 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0213 22:30:51.377306   32908 command_runner.go:130] >            ttl 30
	I0213 22:30:51.377310   32908 command_runner.go:130] >         }
	I0213 22:30:51.377316   32908 command_runner.go:130] >         prometheus :9153
	I0213 22:30:51.377320   32908 command_runner.go:130] >         hosts {
	I0213 22:30:51.377325   32908 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0213 22:30:51.377335   32908 command_runner.go:130] >            fallthrough
	I0213 22:30:51.377339   32908 command_runner.go:130] >         }
	I0213 22:30:51.377344   32908 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0213 22:30:51.377348   32908 command_runner.go:130] >            max_concurrent 1000
	I0213 22:30:51.377355   32908 command_runner.go:130] >         }
	I0213 22:30:51.377358   32908 command_runner.go:130] >         cache 30
	I0213 22:30:51.377363   32908 command_runner.go:130] >         loop
	I0213 22:30:51.377367   32908 command_runner.go:130] >         reload
	I0213 22:30:51.377373   32908 command_runner.go:130] >         loadbalance
	I0213 22:30:51.377379   32908 command_runner.go:130] >     }
	I0213 22:30:51.377383   32908 command_runner.go:130] > kind: ConfigMap
	I0213 22:30:51.377388   32908 command_runner.go:130] > metadata:
	I0213 22:30:51.377392   32908 command_runner.go:130] >   creationTimestamp: "2024-02-13T22:20:28Z"
	I0213 22:30:51.377396   32908 command_runner.go:130] >   name: coredns
	I0213 22:30:51.377403   32908 command_runner.go:130] >   namespace: kube-system
	I0213 22:30:51.377408   32908 command_runner.go:130] >   resourceVersion: "364"
	I0213 22:30:51.377412   32908 command_runner.go:130] >   uid: 34695e9e-4289-42d5-b045-19af5121f4b6
	I0213 22:30:51.379744   32908 node_ready.go:35] waiting up to 6m0s for node "multinode-413653" to be "Ready" ...
	I0213 22:30:51.379975   32908 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 22:30:51.461142   32908 request.go:629] Waited for 81.301105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:51.461231   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:51.461241   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:51.461252   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:51.461263   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:51.468036   32908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0213 22:30:51.468060   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:51.468074   32908 round_trippers.go:580]     Audit-Id: fdf97e42-569f-4e38-b124-0cdd2b74e14e
	I0213 22:30:51.468082   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:51.468092   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:51.468101   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:51.468113   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:51.468125   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:51 GMT
	I0213 22:30:51.468458   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:51.880351   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:51.880373   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:51.880381   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:51.880387   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:51.883277   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:51.883298   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:51.883305   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:51 GMT
	I0213 22:30:51.883310   32908 round_trippers.go:580]     Audit-Id: 1ed23912-c6c1-4a4f-aa45-1267243ce771
	I0213 22:30:51.883315   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:51.883320   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:51.883325   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:51.883331   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:51.883472   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:52.380168   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:52.380201   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:52.380213   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:52.380222   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:52.383202   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:52.383223   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:52.383230   32908 round_trippers.go:580]     Audit-Id: e67430a2-ba21-4a04-8db9-b5675b239f98
	I0213 22:30:52.383236   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:52.383243   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:52.383251   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:52.383260   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:52.383269   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:52 GMT
	I0213 22:30:52.383497   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:52.880167   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:52.880199   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:52.880210   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:52.880217   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:52.883692   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:52.883716   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:52.883727   32908 round_trippers.go:580]     Audit-Id: 3bb22036-69f0-42f0-a11e-096db29362cd
	I0213 22:30:52.883735   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:52.883743   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:52.883761   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:52.883782   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:52.883790   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:52 GMT
	I0213 22:30:52.884258   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"719","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I0213 22:30:53.380966   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:53.380992   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.381011   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.381017   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.383754   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:53.383778   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.383787   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.383797   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.383803   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.383822   32908 round_trippers.go:580]     Audit-Id: 495a37b4-98b3-4b3d-8c3b-b2495120db01
	I0213 22:30:53.383830   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.383838   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.384151   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:53.384443   32908 node_ready.go:49] node "multinode-413653" has status "Ready":"True"
	I0213 22:30:53.384465   32908 node_ready.go:38] duration metric: took 2.004689029s waiting for node "multinode-413653" to be "Ready" ...
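As the timestamps above show, node_ready.go re-fetches the node roughly every half second until its Ready condition flips to True; here that took about 2 s of the 6 m budget. A hedged sketch of an equivalent loop with client-go follows (the clientset is assumed to be built as in the earlier sketch, and the interval, timeout, and function names are illustrative rather than minikube's):

```go
// Sketch of a node-readiness poll in the spirit of node_ready.go; the interval
// and timeout are illustrative, and the clientset is assumed to exist already.
package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// WaitForNodeReady re-fetches the node every 500ms (transient errors are simply
// retried) until it reports Ready or the six-minute budget runs out.
func WaitForNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
	ctx, cancel := context.WithTimeout(ctx, 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```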
	I0213 22:30:53.384477   32908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:30:53.384544   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:30:53.384555   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.384565   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.384574   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.390269   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:30:53.390292   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.390302   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.390310   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.390317   32908 round_trippers.go:580]     Audit-Id: 37ac7c3e-ca4b-4637-a752-1bc857e4373b
	I0213 22:30:53.390326   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.390332   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.390339   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.393951   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"832"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82916 chars]
	I0213 22:30:53.396398   32908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:53.396490   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:53.396501   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.396511   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.396521   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.401185   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:30:53.401204   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.401216   32908 round_trippers.go:580]     Audit-Id: e14a4c02-778e-4b96-9130-ab81b52c1be5
	I0213 22:30:53.401224   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.401231   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.401240   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.401251   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.401260   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.401481   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:53.402016   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:53.402034   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.402045   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.402055   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.403788   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:53.403808   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.403824   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.403832   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.403840   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.403849   32908 round_trippers.go:580]     Audit-Id: 263d4da9-f3cc-4b95-92bc-478248a207c6
	I0213 22:30:53.403859   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.403869   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.404208   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:53.896867   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:53.896894   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.896902   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.896908   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.900023   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:53.900052   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.900061   32908 round_trippers.go:580]     Audit-Id: 1370c753-8bff-4f4b-a34c-aee9f807ece1
	I0213 22:30:53.900069   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.900077   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.900088   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.900096   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.900104   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.900628   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:53.901110   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:53.901124   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:53.901132   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:53.901140   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:53.903533   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:53.903550   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:53.903559   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:53.903567   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:53.903580   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:53 GMT
	I0213 22:30:53.903590   32908 round_trippers.go:580]     Audit-Id: 314363ce-0bf7-4739-b65f-58df80fadd24
	I0213 22:30:53.903600   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:53.903617   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:53.903799   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:54.397551   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:54.397583   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:54.397592   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:54.397598   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:54.400749   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:54.400774   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:54.400789   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:54.400797   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:54.400807   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:54.400816   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:54 GMT
	I0213 22:30:54.400826   32908 round_trippers.go:580]     Audit-Id: 08661594-e099-40c6-bd5a-3be03d7da729
	I0213 22:30:54.400838   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:54.401101   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:54.401536   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:54.401552   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:54.401562   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:54.401571   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:54.403892   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:54.403915   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:54.403924   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:54.403932   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:54.403940   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:54 GMT
	I0213 22:30:54.403948   32908 round_trippers.go:580]     Audit-Id: f1a136a4-c3e2-45aa-b027-417554f41a93
	I0213 22:30:54.403956   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:54.403968   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:54.404097   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:54.897207   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:54.897247   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:54.897259   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:54.897269   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:54.900266   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:54.900294   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:54.900304   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:54.900312   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:54.900320   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:54.900328   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:54 GMT
	I0213 22:30:54.900336   32908 round_trippers.go:580]     Audit-Id: 5b5b5592-5cf0-4198-ab3f-0f6d2303691e
	I0213 22:30:54.900344   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:54.901130   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:54.901567   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:54.901580   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:54.901587   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:54.901596   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:54.903755   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:54.903770   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:54.903778   32908 round_trippers.go:580]     Audit-Id: a40a4280-d736-4899-95e8-64b5c7e1adeb
	I0213 22:30:54.903786   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:54.903795   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:54.903805   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:54.903823   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:54.903831   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:54 GMT
	I0213 22:30:54.904066   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:55.396774   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:55.396808   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:55.396820   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:55.396830   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:55.403304   32908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0213 22:30:55.403337   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:55.403357   32908 round_trippers.go:580]     Audit-Id: 4d54af1d-8992-4bb6-aa68-a8c03b265e30
	I0213 22:30:55.403363   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:55.403368   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:55.403373   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:55.403378   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:55.403383   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:55 GMT
	I0213 22:30:55.405007   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:55.405566   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:55.405595   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:55.405603   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:55.405613   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:55.408099   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:55.408126   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:55.408136   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:55.408145   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:55 GMT
	I0213 22:30:55.408153   32908 round_trippers.go:580]     Audit-Id: 6f084e6f-87a8-4243-83c2-3107f91bafba
	I0213 22:30:55.408167   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:55.408174   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:55.408181   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:55.408477   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:55.408894   32908 pod_ready.go:102] pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace has status "Ready":"False"
	I0213 22:30:55.897246   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:55.897280   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:55.897291   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:55.897301   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:55.902392   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:30:55.902416   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:55.902423   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:55.902428   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:55.902433   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:55.902438   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:55 GMT
	I0213 22:30:55.902443   32908 round_trippers.go:580]     Audit-Id: 5255ab22-4dd8-48c7-877f-639a9951c874
	I0213 22:30:55.902448   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:55.903409   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:55.904037   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:55.904059   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:55.904070   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:55.904080   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:55.907507   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:55.907535   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:55.907546   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:55.907553   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:55 GMT
	I0213 22:30:55.907562   32908 round_trippers.go:580]     Audit-Id: 028a8e2a-71a2-4b9b-bb9c-20f664abbf35
	I0213 22:30:55.907572   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:55.907581   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:55.907590   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:55.909971   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:56.396615   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:56.396642   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.396650   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.396656   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.399749   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:56.399784   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.399794   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.399801   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.399808   32908 round_trippers.go:580]     Audit-Id: 6f793779-f392-48ae-a91a-dff25b307f0c
	I0213 22:30:56.399814   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.399821   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.399831   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.400457   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"779","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I0213 22:30:56.400992   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:56.401008   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.401015   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.401020   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.403307   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:56.403329   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.403337   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.403345   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.403360   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.403369   32908 round_trippers.go:580]     Audit-Id: dfa4d927-982a-42b9-913e-7bf01cc97575
	I0213 22:30:56.403377   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.403384   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.403652   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:56.897546   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:30:56.897576   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.897589   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.897609   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.900796   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:56.900827   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.900838   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.900847   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.900852   32908 round_trippers.go:580]     Audit-Id: 20de4ffa-45db-4aa0-ad1e-501ec13fd64b
	I0213 22:30:56.900857   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.900863   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.900868   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.901096   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0213 22:30:56.901658   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:56.901675   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.901689   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.901699   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.904065   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:56.904090   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.904100   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.904109   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.904117   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.904127   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.904134   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.904149   32908 round_trippers.go:580]     Audit-Id: 336650d0-8d9f-4e79-aff7-0d79e4bbcac9
	I0213 22:30:56.904447   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:56.904855   32908 pod_ready.go:92] pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:56.904878   32908 pod_ready.go:81] duration metric: took 3.508452956s waiting for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
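The same pattern then runs per pod: pod_ready fetches each system-critical pod (selected by the labels listed at the start of this phase) and inspects its Ready condition, which is why coredns-5dd5756b68-lq7xh first reports "Ready":"False" and flips to "True" about 3.5 s later. A short sketch of that per-pod check, with hypothetical helper names:

```go
// Sketch of the per-pod readiness test behind the "Ready":"False"/"True" lines;
// helper names are hypothetical, the condition logic is standard corev1.
package waiters

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's Ready condition is True, i.e. all of its
// containers currently pass their readiness checks.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// listCriticalPods fetches the kube-system pods matching one of the selectors
// named in the log, e.g. "k8s-app=kube-dns" or "component=etcd".
func listCriticalPods(ctx context.Context, client kubernetes.Interface, selector string) ([]corev1.Pod, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	return pods.Items, nil
}
```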
	I0213 22:30:56.904890   32908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:56.904969   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-413653
	I0213 22:30:56.904980   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.904990   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.905003   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.907451   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:56.907473   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.907482   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.907490   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.907498   32908 round_trippers.go:580]     Audit-Id: 27186169-42b6-49ba-b389-a4ba5110ef85
	I0213 22:30:56.907505   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.907513   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.907521   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.907751   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-413653","namespace":"kube-system","uid":"6adf5771-f03b-47ca-ad97-384b664fb8ab","resourceVersion":"833","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.81:2379","kubernetes.io/config.hash":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.mirror":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.seen":"2024-02-13T22:20:28.219611587Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0213 22:30:56.908232   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:56.908251   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.908262   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.908272   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.910809   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:56.910825   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.910832   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.910837   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.910845   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.910851   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.910858   32908 round_trippers.go:580]     Audit-Id: 9e26cee1-bb2f-44ce-8570-5e03f664d6a8
	I0213 22:30:56.910867   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.911023   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:56.911420   32908 pod_ready.go:92] pod "etcd-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:56.911444   32908 pod_ready.go:81] duration metric: took 6.539642ms waiting for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:56.911467   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:56.911540   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:56.911550   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.911561   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.911574   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.913583   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:56.913597   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.913603   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.913608   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.913620   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.913634   32908 round_trippers.go:580]     Audit-Id: e25efff4-0cef-4994-ad17-97dcb71ecce0
	I0213 22:30:56.913647   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.913658   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.913798   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:56.914214   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:56.914229   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:56.914236   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:56.914242   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:56.916506   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:56.916531   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:56.916541   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:56 GMT
	I0213 22:30:56.916549   32908 round_trippers.go:580]     Audit-Id: 6d360dc0-ce50-4e97-b017-9d8f95f7e239
	I0213 22:30:56.916558   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:56.916566   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:56.916572   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:56.916580   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:56.916782   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:57.412522   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:57.412547   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:57.412558   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:57.412563   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:57.418027   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:30:57.418052   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:57.418065   32908 round_trippers.go:580]     Audit-Id: fd28dbc8-8c05-4398-9223-2895bf31ddc9
	I0213 22:30:57.418074   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:57.418084   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:57.418093   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:57.418100   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:57.418108   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:57 GMT
	I0213 22:30:57.419203   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:57.419620   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:57.419634   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:57.419642   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:57.419647   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:57.422612   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:57.422634   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:57.422645   32908 round_trippers.go:580]     Audit-Id: b12ab04d-9ac4-4c06-837e-328147e6471d
	I0213 22:30:57.422655   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:57.422667   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:57.422675   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:57.422684   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:57.422694   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:57 GMT
	I0213 22:30:57.423329   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:57.911962   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:57.911988   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:57.911996   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:57.912002   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:57.915114   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:57.915171   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:57.915179   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:57.915185   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:57 GMT
	I0213 22:30:57.915190   32908 round_trippers.go:580]     Audit-Id: f06bdc25-a50d-4ae9-a803-2e79fddfe8f3
	I0213 22:30:57.915195   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:57.915200   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:57.915206   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:57.915776   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:57.916196   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:57.916209   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:57.916216   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:57.916227   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:57.918919   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:57.918940   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:57.918948   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:57.918956   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:57.918965   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:57.918974   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:57 GMT
	I0213 22:30:57.918983   32908 round_trippers.go:580]     Audit-Id: 6192c3c4-af32-4ffb-8471-c01892b68731
	I0213 22:30:57.918995   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:57.919729   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:58.412491   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:58.412517   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:58.412525   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:58.412531   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:58.415724   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:58.415747   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:58.415757   32908 round_trippers.go:580]     Audit-Id: b5c25461-62fe-4c6e-a530-f33d130a68ed
	I0213 22:30:58.415766   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:58.415781   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:58.415786   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:58.415791   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:58.415797   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:58 GMT
	I0213 22:30:58.416001   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:58.416421   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:58.416435   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:58.416447   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:58.416453   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:58.419079   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:58.419101   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:58.419111   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:58.419119   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:58.419126   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:58.419133   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:58 GMT
	I0213 22:30:58.419140   32908 round_trippers.go:580]     Audit-Id: 8bf131c7-1074-46db-ad7b-44ff40b9ec93
	I0213 22:30:58.419148   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:58.419540   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:58.912488   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:58.912526   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:58.912535   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:58.912541   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:58.915738   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:58.915770   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:58.915781   32908 round_trippers.go:580]     Audit-Id: a398a6de-99f9-4b2b-97f5-a8f381efffdb
	I0213 22:30:58.915790   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:58.915798   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:58.915804   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:58.915809   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:58.915814   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:58 GMT
	I0213 22:30:58.916605   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:58.917017   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:58.917031   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:58.917039   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:58.917047   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:58.919571   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:58.919594   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:58.919611   32908 round_trippers.go:580]     Audit-Id: 31bd41df-149c-48a1-9366-601913251f0a
	I0213 22:30:58.919620   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:58.919627   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:58.919639   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:58.919644   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:58.919669   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:58 GMT
	I0213 22:30:58.920041   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:58.920348   32908 pod_ready.go:102] pod "kube-apiserver-multinode-413653" in "kube-system" namespace has status "Ready":"False"
	I0213 22:30:59.411736   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:59.411767   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.411775   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.411781   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.415388   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:30:59.415420   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.415431   32908 round_trippers.go:580]     Audit-Id: 4aaffb13-2e7c-432d-bbe9-4988bf41924c
	I0213 22:30:59.415440   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.415448   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.415475   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.415481   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.415487   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.415906   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"778","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I0213 22:30:59.416318   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:59.416332   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.416340   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.416346   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.418875   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:59.418897   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.418904   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.418909   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.418914   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.418920   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.418929   32908 round_trippers.go:580]     Audit-Id: aea43495-34c0-4cff-b689-d289d678623b
	I0213 22:30:59.418939   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.419109   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:59.912190   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:30:59.912215   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.912223   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.912229   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.917315   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:30:59.917337   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.917344   32908 round_trippers.go:580]     Audit-Id: a69f8592-7ac2-4987-b928-5a61a87b858c
	I0213 22:30:59.917349   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.917354   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.917359   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.917364   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.917374   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.918523   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"860","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0213 22:30:59.919017   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:59.919037   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.919047   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.919058   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.921784   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:59.921803   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.921813   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.921821   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.921826   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.921831   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.921836   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.921841   32908 round_trippers.go:580]     Audit-Id: 058a4d16-335c-4e8c-966b-6666944089a4
	I0213 22:30:59.922384   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:59.922675   32908 pod_ready.go:92] pod "kube-apiserver-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:59.922691   32908 pod_ready.go:81] duration metric: took 3.011212378s waiting for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.922699   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.922745   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-413653
	I0213 22:30:59.922753   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.922760   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.922765   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.925458   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:59.925476   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.925482   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.925487   32908 round_trippers.go:580]     Audit-Id: 601fe7f7-d506-43ed-8e8e-fef2bd67cdd8
	I0213 22:30:59.925492   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.925497   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.925502   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.925507   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.926205   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-413653","namespace":"kube-system","uid":"1d3432c0-f2cd-4371-9599-9a119dc1a8ab","resourceVersion":"835","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.mirror":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.seen":"2024-02-13T22:20:28.219615864Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0213 22:30:59.926561   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:30:59.926573   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.926579   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.926585   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.929146   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:30:59.929174   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.929183   32908 round_trippers.go:580]     Audit-Id: 76a19740-cf85-44db-a6c3-36be61ad2394
	I0213 22:30:59.929190   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.929197   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.929204   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.929212   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.929223   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.929811   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:30:59.930173   32908 pod_ready.go:92] pod "kube-controller-manager-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:59.930191   32908 pod_ready.go:81] duration metric: took 7.486924ms waiting for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.930200   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.930246   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:30:59.930254   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.930261   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.930267   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.932166   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:59.932179   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.932185   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.932195   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.932200   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.932205   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.932213   32908 round_trippers.go:580]     Audit-Id: 7d444015-2aed-4535-b769-0f002e150b4a
	I0213 22:30:59.932221   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.932434   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"480","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0213 22:30:59.932902   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:30:59.932920   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.932927   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.932932   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.934774   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:59.934793   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.934802   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.934810   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.934818   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.934827   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.934839   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.934847   32908 round_trippers.go:580]     Audit-Id: 01e92375-b80c-431b-bd31-57fb8e0bd04c
	I0213 22:30:59.935163   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"e15d93ce-6cc1-4cb6-8e3a-d3d69862c7a4","resourceVersion":"708","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_22_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0213 22:30:59.935432   32908 pod_ready.go:92] pod "kube-proxy-26ww9" in "kube-system" namespace has status "Ready":"True"
	I0213 22:30:59.935448   32908 pod_ready.go:81] duration metric: took 5.241918ms waiting for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.935458   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:30:59.935506   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:30:59.935516   32908 round_trippers.go:469] Request Headers:
	I0213 22:30:59.935529   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:30:59.935539   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:30:59.937413   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:30:59.937430   32908 round_trippers.go:577] Response Headers:
	I0213 22:30:59.937440   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:30:59.937448   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:30:59.937455   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:30:59.937464   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:30:59.937474   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:30:59 GMT
	I0213 22:30:59.937485   32908 round_trippers.go:580]     Audit-Id: 148e00e6-4313-468a-a76f-0d9858dd6934
	I0213 22:30:59.937599   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h5bvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7a12109-66cd-41a9-b7e7-4e53a27a4ca7","resourceVersion":"801","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0213 22:31:00.061385   32908 request.go:629] Waited for 123.352032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:31:00.061487   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:31:00.061496   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.061508   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.061519   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.065088   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:31:00.065114   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.065125   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.065130   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.065136   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.065141   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.065146   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.065151   32908 round_trippers.go:580]     Audit-Id: 3cfcd727-33f3-45a3-b556-47c8a5cf32cc
	I0213 22:31:00.066219   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:31:00.066556   32908 pod_ready.go:92] pod "kube-proxy-h5bvp" in "kube-system" namespace has status "Ready":"True"
	I0213 22:31:00.066579   32908 pod_ready.go:81] duration metric: took 131.10881ms waiting for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:31:00.066592   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:31:00.261079   32908 request.go:629] Waited for 194.411357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:31:00.261171   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:31:00.261180   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.261191   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.261202   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.264414   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:31:00.264445   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.264460   32908 round_trippers.go:580]     Audit-Id: c883ee39-3886-4743-b02f-541894f1b353
	I0213 22:31:00.264469   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.264477   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.264486   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.264495   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.264503   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.264985   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4ggx","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9fa1c43-43a7-4737-8b10-e5327e355e9a","resourceVersion":"687","creationTimestamp":"2024-02-13T22:22:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0213 22:31:00.460714   32908 request.go:629] Waited for 195.290329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:31:00.460799   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:31:00.460804   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.460811   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.460818   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.469091   32908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0213 22:31:00.469124   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.469135   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.469144   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.469151   32908 round_trippers.go:580]     Audit-Id: ed9c836c-7f64-49fa-8c23-10d8dc663df0
	I0213 22:31:00.469166   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.469175   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.469182   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.469318   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m03","uid":"3fd11080-7896-4845-a0ac-96b51f08d0cd","resourceVersion":"707","creationTimestamp":"2024-02-13T22:22:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_22_52_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0213 22:31:00.469696   32908 pod_ready.go:92] pod "kube-proxy-k4ggx" in "kube-system" namespace has status "Ready":"True"
	I0213 22:31:00.469721   32908 pod_ready.go:81] duration metric: took 403.121436ms waiting for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:31:00.469734   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:31:00.660557   32908 request.go:629] Waited for 190.757453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:31:00.660664   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:31:00.660670   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.660678   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.660685   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.664337   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:31:00.664368   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.664379   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.664388   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.664396   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.664405   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.664417   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.664426   32908 round_trippers.go:580]     Audit-Id: 93fc24eb-ad9f-4509-a7b0-5680bf4733aa
	I0213 22:31:00.664555   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-413653","namespace":"kube-system","uid":"08710d51-793f-4606-9075-b5ab7331893e","resourceVersion":"861","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.mirror":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.seen":"2024-02-13T22:20:28.219616670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0213 22:31:00.861392   32908 request.go:629] Waited for 196.40465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:31:00.861469   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:31:00.861477   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.861488   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.861499   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.863993   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:31:00.864060   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.864079   32908 round_trippers.go:580]     Audit-Id: 9cd41a8c-13ce-42c3-9518-3817b08e8f81
	I0213 22:31:00.864087   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.864095   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.864101   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.864109   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.864119   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.864274   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I0213 22:31:00.864901   32908 pod_ready.go:92] pod "kube-scheduler-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:31:00.864960   32908 pod_ready.go:81] duration metric: took 395.207475ms waiting for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:31:00.864985   32908 pod_ready.go:38] duration metric: took 7.480496437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
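
	The pod_ready.go lines above all follow the same pattern: poll the apiserver for a pod's Ready condition, up to a fixed timeout ("waiting up to 6m0s for pod ... to be Ready"). A minimal standalone sketch of that wait loop is below; it is not minikube's actual pod_ready.go code, and the condition function is a placeholder.

	// Sketch of a bounded wait loop like the pod_ready.go waits above:
	// poll a condition at a fixed interval until it returns true or the
	// timeout expires. The condition body here is a stand-in.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := cond()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := waitFor(6*time.Minute, 500*time.Millisecond, func() (bool, error) {
			// Placeholder: minikube asks the apiserver whether the pod's
			// Ready condition is "True", as in the pod_ready.go:92 lines.
			return true, nil
		})
		fmt.Printf("done in %s, err=%v\n", time.Since(start), err)
	}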
	I0213 22:31:00.865046   32908 api_server.go:52] waiting for apiserver process to appear ...
	I0213 22:31:00.865112   32908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:31:00.880431   32908 command_runner.go:130] > 1066
	I0213 22:31:00.880845   32908 api_server.go:72] duration metric: took 9.594884188s to wait for apiserver process to appear ...
	I0213 22:31:00.880870   32908 api_server.go:88] waiting for apiserver healthz status ...
	I0213 22:31:00.880889   32908 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:31:00.886944   32908 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0213 22:31:00.887051   32908 round_trippers.go:463] GET https://192.168.39.81:8443/version
	I0213 22:31:00.887065   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:00.887076   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:00.887088   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:00.888312   32908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0213 22:31:00.888334   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:00.888344   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:00.888356   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:00.888367   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:00.888379   32908 round_trippers.go:580]     Content-Length: 264
	I0213 22:31:00.888390   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:00 GMT
	I0213 22:31:00.888400   32908 round_trippers.go:580]     Audit-Id: 002fc02a-7a16-4deb-bc70-7325606d1dfb
	I0213 22:31:00.888412   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:00.888437   32908 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0213 22:31:00.888485   32908 api_server.go:141] control plane version: v1.28.4
	I0213 22:31:00.888503   32908 api_server.go:131] duration metric: took 7.627238ms to wait for apiserver health ...
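
	The two checks above are plain HTTP calls: GET /healthz until the apiserver answers 200 with body "ok", then GET /version and decode the JSON shown in the log. A minimal standalone sketch follows; the base URL is taken from the log, and the InsecureSkipVerify transport (instead of the cluster CA that minikube actually uses) is an illustrative simplification.

	// Sketch of the healthz/version probe logged above (not minikube's api_server.go).
	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		base := "https://192.168.39.81:8443" // from the log; replace for your cluster
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		// Poll /healthz until it returns 200 "ok".
		healthy := false
		for i := 0; i < 30 && !healthy; i++ {
			resp, err := client.Get(base + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				healthy = resp.StatusCode == http.StatusOK && string(body) == "ok"
			}
			if !healthy {
				time.Sleep(time.Second)
			}
		}
		if !healthy {
			panic("apiserver never reported ok")
		}

		// Read the control-plane version, as in the "control plane version" line above.
		resp, err := client.Get(base + "/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
	}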
	I0213 22:31:00.888512   32908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 22:31:01.060992   32908 request.go:629] Waited for 172.390925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:31:01.061056   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:31:01.061071   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:01.061101   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:01.061120   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:01.068327   32908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0213 22:31:01.068353   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:01.068360   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:01.068369   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:01.068377   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:01.068386   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:01 GMT
	I0213 22:31:01.068393   32908 round_trippers.go:580]     Audit-Id: fb06333d-71bf-422e-9e81-765de326a466
	I0213 22:31:01.068401   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:01.071263   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0213 22:31:01.073684   32908 system_pods.go:59] 12 kube-system pods found
	I0213 22:31:01.073709   32908 system_pods.go:61] "coredns-5dd5756b68-lq7xh" [2543314d-46b0-490c-b0e1-74f4777913f9] Running
	I0213 22:31:01.073713   32908 system_pods.go:61] "etcd-multinode-413653" [6adf5771-f03b-47ca-ad97-384b664fb8ab] Running
	I0213 22:31:01.073719   32908 system_pods.go:61] "kindnet-4m5lx" [9c27db1a-aefc-4f82-921d-3f412fbeed91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:31:01.073726   32908 system_pods.go:61] "kindnet-p2bqz" [c0ca435d-2301-48c0-a56b-2f147217fb91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:31:01.073730   32908 system_pods.go:61] "kindnet-shxmz" [1684b3fd-4115-4ab7-88d4-dc1c95680525] Running
	I0213 22:31:01.073735   32908 system_pods.go:61] "kube-apiserver-multinode-413653" [1540a1dc-5f90-45b2-8d9e-0f0a1581328a] Running
	I0213 22:31:01.073739   32908 system_pods.go:61] "kube-controller-manager-multinode-413653" [1d3432c0-f2cd-4371-9599-9a119dc1a8ab] Running
	I0213 22:31:01.073742   32908 system_pods.go:61] "kube-proxy-26ww9" [2b00e8eb-8829-460d-a162-7fe8c783c260] Running
	I0213 22:31:01.073746   32908 system_pods.go:61] "kube-proxy-h5bvp" [d7a12109-66cd-41a9-b7e7-4e53a27a4ca7] Running
	I0213 22:31:01.073749   32908 system_pods.go:61] "kube-proxy-k4ggx" [b9fa1c43-43a7-4737-8b10-e5327e355e9a] Running
	I0213 22:31:01.073753   32908 system_pods.go:61] "kube-scheduler-multinode-413653" [08710d51-793f-4606-9075-b5ab7331893e] Running
	I0213 22:31:01.073756   32908 system_pods.go:61] "storage-provisioner" [aecede5e-5ae2-4239-b920-ab1af32c4d38] Running
	I0213 22:31:01.073762   32908 system_pods.go:74] duration metric: took 185.241373ms to wait for pod list to return data ...
	I0213 22:31:01.073769   32908 default_sa.go:34] waiting for default service account to be created ...
	I0213 22:31:01.261256   32908 request.go:629] Waited for 187.416405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/default/serviceaccounts
	I0213 22:31:01.261343   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/default/serviceaccounts
	I0213 22:31:01.261355   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:01.261367   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:01.261375   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:01.264888   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:31:01.264916   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:01.264923   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:01.264929   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:01.264934   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:01.264940   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:01.264945   32908 round_trippers.go:580]     Content-Length: 261
	I0213 22:31:01.264950   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:01 GMT
	I0213 22:31:01.264955   32908 round_trippers.go:580]     Audit-Id: 1a8ba5ad-1336-4236-a652-e9a15df37d19
	I0213 22:31:01.264975   32908 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c3af0d37-b9af-4230-a286-76fc9667c9cf","resourceVersion":"313","creationTimestamp":"2024-02-13T22:20:40Z"}}]}
	I0213 22:31:01.265166   32908 default_sa.go:45] found service account: "default"
	I0213 22:31:01.265187   32908 default_sa.go:55] duration metric: took 191.413297ms for default service account to be created ...
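
	The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages come from the Kubernetes client's own rate limiter, which delays requests that exceed its QPS/burst budget. A minimal sketch of that idea using golang.org/x/time/rate is below; the 5 QPS / burst-of-10 numbers are illustrative assumptions, not minikube's or client-go's actual settings.

	// Sketch of client-side request throttling (go get golang.org/x/time).
	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 requests/sec, burst of 10 (made-up values)
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			if waited := time.Since(start); waited > time.Millisecond {
				fmt.Printf("request %d waited %s due to client-side throttling\n", i, waited)
			}
		}
	}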
	I0213 22:31:01.265197   32908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 22:31:01.460581   32908 request.go:629] Waited for 195.306631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:31:01.460665   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:31:01.460674   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:01.460684   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:01.460690   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:01.465648   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:31:01.465673   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:01.465682   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:01.465691   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:01.465699   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:01.465708   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:01 GMT
	I0213 22:31:01.465718   32908 round_trippers.go:580]     Audit-Id: 187ec845-5553-4ef9-9657-359e961ada4a
	I0213 22:31:01.465730   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:01.467909   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I0213 22:31:01.470269   32908 system_pods.go:86] 12 kube-system pods found
	I0213 22:31:01.470289   32908 system_pods.go:89] "coredns-5dd5756b68-lq7xh" [2543314d-46b0-490c-b0e1-74f4777913f9] Running
	I0213 22:31:01.470294   32908 system_pods.go:89] "etcd-multinode-413653" [6adf5771-f03b-47ca-ad97-384b664fb8ab] Running
	I0213 22:31:01.470301   32908 system_pods.go:89] "kindnet-4m5lx" [9c27db1a-aefc-4f82-921d-3f412fbeed91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:31:01.470308   32908 system_pods.go:89] "kindnet-p2bqz" [c0ca435d-2301-48c0-a56b-2f147217fb91] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0213 22:31:01.470314   32908 system_pods.go:89] "kindnet-shxmz" [1684b3fd-4115-4ab7-88d4-dc1c95680525] Running
	I0213 22:31:01.470320   32908 system_pods.go:89] "kube-apiserver-multinode-413653" [1540a1dc-5f90-45b2-8d9e-0f0a1581328a] Running
	I0213 22:31:01.470328   32908 system_pods.go:89] "kube-controller-manager-multinode-413653" [1d3432c0-f2cd-4371-9599-9a119dc1a8ab] Running
	I0213 22:31:01.470333   32908 system_pods.go:89] "kube-proxy-26ww9" [2b00e8eb-8829-460d-a162-7fe8c783c260] Running
	I0213 22:31:01.470337   32908 system_pods.go:89] "kube-proxy-h5bvp" [d7a12109-66cd-41a9-b7e7-4e53a27a4ca7] Running
	I0213 22:31:01.470341   32908 system_pods.go:89] "kube-proxy-k4ggx" [b9fa1c43-43a7-4737-8b10-e5327e355e9a] Running
	I0213 22:31:01.470345   32908 system_pods.go:89] "kube-scheduler-multinode-413653" [08710d51-793f-4606-9075-b5ab7331893e] Running
	I0213 22:31:01.470350   32908 system_pods.go:89] "storage-provisioner" [aecede5e-5ae2-4239-b920-ab1af32c4d38] Running
	I0213 22:31:01.470357   32908 system_pods.go:126] duration metric: took 205.154061ms to wait for k8s-apps to be running ...
	I0213 22:31:01.470366   32908 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 22:31:01.470409   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:31:01.487620   32908 system_svc.go:56] duration metric: took 17.245555ms WaitForService to wait for kubelet.
	I0213 22:31:01.487650   32908 kubeadm.go:581] duration metric: took 10.20169381s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 22:31:01.487672   32908 node_conditions.go:102] verifying NodePressure condition ...
	I0213 22:31:01.661418   32908 request.go:629] Waited for 173.683571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes
	I0213 22:31:01.661479   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes
	I0213 22:31:01.661497   32908 round_trippers.go:469] Request Headers:
	I0213 22:31:01.661504   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:31:01.661510   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:31:01.665023   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:31:01.665044   32908 round_trippers.go:577] Response Headers:
	I0213 22:31:01.665051   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:31:01 GMT
	I0213 22:31:01.665057   32908 round_trippers.go:580]     Audit-Id: d6bc698d-ab7e-4729-8ddd-f96e1f52bc4a
	I0213 22:31:01.665062   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:31:01.665067   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:31:01.665072   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:31:01.665077   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:31:01.665424   32908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"831","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16178 chars]
	I0213 22:31:01.666026   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:31:01.666045   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:31:01.666056   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:31:01.666060   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:31:01.666064   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:31:01.666067   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:31:01.666071   32908 node_conditions.go:105] duration metric: took 178.39391ms to run NodePressure ...
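
	The node_conditions lines above are derived from each node's status.capacity in the /api/v1/nodes response (ephemeral-storage and cpu). The sketch below decodes a hand-trimmed stand-in for that NodeList JSON and prints the same two fields; the literal input is not the full response from the log.

	// Sketch of reading per-node capacity, as node_conditions.go reports above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		raw := []byte(`{"items":[{"metadata":{"name":"multinode-413653"},
		  "status":{"capacity":{"cpu":"2","ephemeral-storage":"17784752Ki"}}}]}`)
		var nl nodeList
		if err := json.Unmarshal(raw, &nl); err != nil {
			panic(err)
		}
		for _, n := range nl.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
		}
	}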
	I0213 22:31:01.666082   32908 start.go:228] waiting for startup goroutines ...
	I0213 22:31:01.666088   32908 start.go:233] waiting for cluster config update ...
	I0213 22:31:01.666095   32908 start.go:242] writing updated cluster config ...
	I0213 22:31:01.666520   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:31:01.666597   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:31:01.668917   32908 out.go:177] * Starting worker node multinode-413653-m02 in cluster multinode-413653
	I0213 22:31:01.670281   32908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 22:31:01.670303   32908 cache.go:56] Caching tarball of preloaded images
	I0213 22:31:01.670399   32908 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 22:31:01.670412   32908 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 22:31:01.670499   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:31:01.670684   32908 start.go:365] acquiring machines lock for multinode-413653-m02: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 22:31:01.670736   32908 start.go:369] acquired machines lock for "multinode-413653-m02" in 27.681µs
	I0213 22:31:01.670752   32908 start.go:96] Skipping create...Using existing machine configuration
	I0213 22:31:01.670763   32908 fix.go:54] fixHost starting: m02
	I0213 22:31:01.671024   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:31:01.671053   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:31:01.685375   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0213 22:31:01.685820   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:31:01.686285   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:31:01.686309   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:31:01.686623   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:31:01.686818   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:31:01.686972   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetState
	I0213 22:31:01.688459   32908 fix.go:102] recreateIfNeeded on multinode-413653-m02: state=Running err=<nil>
	W0213 22:31:01.688478   32908 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 22:31:01.690434   32908 out.go:177] * Updating the running kvm2 "multinode-413653-m02" VM ...
	I0213 22:31:01.691769   32908 machine.go:88] provisioning docker machine ...
	I0213 22:31:01.691792   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:31:01.691995   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetMachineName
	I0213 22:31:01.692185   32908 buildroot.go:166] provisioning hostname "multinode-413653-m02"
	I0213 22:31:01.692206   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetMachineName
	I0213 22:31:01.692341   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:31:01.694614   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.695107   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:01.695137   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.695299   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:31:01.695474   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:01.695645   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:01.695795   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:31:01.695977   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:31:01.696356   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0213 22:31:01.696374   32908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-413653-m02 && echo "multinode-413653-m02" | sudo tee /etc/hostname
	I0213 22:31:01.842081   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-413653-m02
	
	I0213 22:31:01.842113   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:31:01.844960   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.845266   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:01.845300   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.845525   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:31:01.845748   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:01.845901   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:01.846025   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:31:01.846171   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:31:01.846628   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0213 22:31:01.846656   32908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-413653-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-413653-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-413653-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 22:31:01.979141   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
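
	The shell snippet minikube just ran over SSH makes the /etc/hosts update idempotent: skip if a line already ends with the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. The same logic expressed in Go, purely for illustration (minikube runs the shell version on the guest), looks like this:

	// Sketch of the idempotent /etc/hosts hostname entry from the SSH snippet above.
	package main

	import (
		"fmt"
		"strings"
	)

	func ensureHostsEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
				return hosts // entry already present, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the existing loopback alias
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + hostname + "\n" // no 127.0.1.1 line: append one
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 old-name", "multinode-413653-m02"))
	}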
	I0213 22:31:01.979172   32908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 22:31:01.979187   32908 buildroot.go:174] setting up certificates
	I0213 22:31:01.979196   32908 provision.go:83] configureAuth start
	I0213 22:31:01.979210   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetMachineName
	I0213 22:31:01.979492   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetIP
	I0213 22:31:01.982328   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.982751   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:01.982788   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.982913   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:31:01.985303   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.985697   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:01.985726   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:01.985883   32908 provision.go:138] copyHostCerts
	I0213 22:31:01.985917   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:31:01.985961   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 22:31:01.985970   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:31:01.986053   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 22:31:01.986142   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:31:01.986169   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 22:31:01.986179   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:31:01.986219   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 22:31:01.986296   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:31:01.986330   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 22:31:01.986340   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:31:01.986379   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 22:31:01.986445   32908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.multinode-413653-m02 san=[192.168.39.94 192.168.39.94 localhost 127.0.0.1 minikube multinode-413653-m02]
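
	The "generating server cert ... san=[...]" step issues a machine certificate whose subject alternative names cover the node IP, localhost, and the hostname. A minimal standalone sketch with the same kinds of SANs is below; it self-signs with a fresh ECDSA key, whereas minikube signs with its own CA key pair, so treat this only as an illustration of the SAN fields.

	// Sketch of issuing a server certificate with IP and DNS SANs, as logged above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-413653-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.39.94"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "multinode-413653-m02"},
		}
		// Self-signed for brevity; minikube's provision.go signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}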
	I0213 22:31:02.169330   32908 provision.go:172] copyRemoteCerts
	I0213 22:31:02.169398   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 22:31:02.169425   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:31:02.171854   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:02.172171   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:02.172205   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:02.172355   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:31:02.172581   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:02.172729   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:31:02.172866   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m02/id_rsa Username:docker}
	I0213 22:31:02.268839   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 22:31:02.268911   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 22:31:02.293206   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 22:31:02.293295   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0213 22:31:02.319491   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 22:31:02.319571   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 22:31:02.346463   32908 provision.go:86] duration metric: configureAuth took 367.253474ms
	I0213 22:31:02.346493   32908 buildroot.go:189] setting minikube options for container-runtime
	I0213 22:31:02.346764   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:31:02.346835   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:31:02.349173   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:02.349555   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:31:02.349588   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:31:02.349715   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:31:02.349925   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:02.350076   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:31:02.350195   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:31:02.350429   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:31:02.350876   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0213 22:31:02.350898   32908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 22:32:32.831520   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 22:32:32.831560   32908 machine.go:91] provisioned docker machine in 1m31.139774617s
	I0213 22:32:32.831574   32908 start.go:300] post-start starting for "multinode-413653-m02" (driver="kvm2")
	I0213 22:32:32.831590   32908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 22:32:32.831611   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:32:32.831959   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 22:32:32.831987   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:32:32.835333   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:32.835767   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:32.835804   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:32.836013   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:32:32.836201   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:32:32.836351   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:32:32.836529   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m02/id_rsa Username:docker}
	I0213 22:32:32.933145   32908 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 22:32:32.937368   32908 command_runner.go:130] > NAME=Buildroot
	I0213 22:32:32.937393   32908 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0213 22:32:32.937399   32908 command_runner.go:130] > ID=buildroot
	I0213 22:32:32.937408   32908 command_runner.go:130] > VERSION_ID=2021.02.12
	I0213 22:32:32.937415   32908 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0213 22:32:32.937447   32908 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 22:32:32.937464   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 22:32:32.937535   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 22:32:32.937624   32908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 22:32:32.937637   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /etc/ssl/certs/162002.pem
	I0213 22:32:32.937741   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 22:32:32.946444   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:32:32.972391   32908 start.go:303] post-start completed in 140.799883ms
	I0213 22:32:32.972421   32908 fix.go:56] fixHost completed within 1m31.301658004s
	I0213 22:32:32.972448   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:32:32.975119   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:32.975554   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:32.975591   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:32.975711   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:32:32.975927   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:32:32.976110   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:32:32.976252   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:32:32.976425   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:32:32.976777   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0213 22:32:32.976800   32908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 22:32:33.107611   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707863553.099435397
	
	I0213 22:32:33.107642   32908 fix.go:206] guest clock: 1707863553.099435397
	I0213 22:32:33.107652   32908 fix.go:219] Guest: 2024-02-13 22:32:33.099435397 +0000 UTC Remote: 2024-02-13 22:32:32.972425505 +0000 UTC m=+451.419140821 (delta=127.009892ms)
	I0213 22:32:33.107669   32908 fix.go:190] guest clock delta is within tolerance: 127.009892ms
	I0213 22:32:33.107704   32908 start.go:83] releasing machines lock for "multinode-413653-m02", held for 1m31.436930087s
	I0213 22:32:33.107733   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:32:33.108010   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetIP
	I0213 22:32:33.110866   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.111243   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:33.111270   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.113510   32908 out.go:177] * Found network options:
	I0213 22:32:33.115010   32908 out.go:177]   - NO_PROXY=192.168.39.81
	W0213 22:32:33.116358   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0213 22:32:33.116392   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:32:33.117102   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:32:33.117318   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:32:33.117415   32908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 22:32:33.117459   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	W0213 22:32:33.117545   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0213 22:32:33.117613   32908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 22:32:33.117629   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:32:33.120714   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.120989   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.121050   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:33.121081   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.121213   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:32:33.121353   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:33.121380   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:33.121403   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:32:33.121588   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:32:33.121596   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:32:33.121778   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:32:33.121771   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m02/id_rsa Username:docker}
	I0213 22:32:33.121915   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:32:33.122024   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m02/id_rsa Username:docker}
	I0213 22:32:33.376208   32908 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0213 22:32:33.376264   32908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 22:32:33.382889   32908 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0213 22:32:33.383163   32908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 22:32:33.383240   32908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 22:32:33.392898   32908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0213 22:32:33.392929   32908 start.go:475] detecting cgroup driver to use...
	I0213 22:32:33.392990   32908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 22:32:33.408625   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 22:32:33.422338   32908 docker.go:217] disabling cri-docker service (if available) ...
	I0213 22:32:33.422394   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 22:32:33.436747   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 22:32:33.451476   32908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 22:32:33.599370   32908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 22:32:33.742738   32908 docker.go:233] disabling docker service ...
	I0213 22:32:33.742822   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 22:32:33.759930   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 22:32:33.775914   32908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 22:32:33.915207   32908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 22:32:34.051600   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 22:32:34.067160   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 22:32:34.088397   32908 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0213 22:32:34.088440   32908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 22:32:34.088485   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:32:34.100149   32908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 22:32:34.100217   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:32:34.112708   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:32:34.123635   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:32:34.134860   32908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 22:32:34.145750   32908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 22:32:34.155260   32908 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0213 22:32:34.155359   32908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 22:32:34.165242   32908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 22:32:34.313665   32908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 22:32:39.462042   32908 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.148339238s)
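
	The three sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with pause_image set to registry.k8s.io/pause:3.9, cgroup_manager set to "cgroupfs", and a conmon_cgroup = "pod" line right after it. The sketch below applies the same substitutions in Go to a made-up minimal drop-in, just to show the end state; the input content is invented for the example.

	// Sketch of the 02-crio.conf edits driven by the sed commands logged above.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Pin the pause image, as in crio.go:59 above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Drop any existing conmon_cgroup line, then set cgroupfs and add conmon_cgroup = "pod".
		conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}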
	I0213 22:32:39.462067   32908 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 22:32:39.462110   32908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 22:32:39.467407   32908 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0213 22:32:39.467442   32908 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0213 22:32:39.467453   32908 command_runner.go:130] > Device: 16h/22d	Inode: 1205        Links: 1
	I0213 22:32:39.467465   32908 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:32:39.467473   32908 command_runner.go:130] > Access: 2024-02-13 22:32:39.383598807 +0000
	I0213 22:32:39.467482   32908 command_runner.go:130] > Modify: 2024-02-13 22:32:39.383598807 +0000
	I0213 22:32:39.467490   32908 command_runner.go:130] > Change: 2024-02-13 22:32:39.383598807 +0000
	I0213 22:32:39.467496   32908 command_runner.go:130] >  Birth: -
	I0213 22:32:39.467641   32908 start.go:543] Will wait 60s for crictl version
	I0213 22:32:39.467706   32908 ssh_runner.go:195] Run: which crictl
	I0213 22:32:39.471910   32908 command_runner.go:130] > /usr/bin/crictl
	I0213 22:32:39.471989   32908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 22:32:39.513322   32908 command_runner.go:130] > Version:  0.1.0
	I0213 22:32:39.513345   32908 command_runner.go:130] > RuntimeName:  cri-o
	I0213 22:32:39.513350   32908 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0213 22:32:39.513355   32908 command_runner.go:130] > RuntimeApiVersion:  v1
	I0213 22:32:39.514805   32908 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 22:32:39.514890   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:32:39.570650   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:32:39.570679   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:32:39.570689   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:32:39.570700   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:32:39.570709   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:32:39.570715   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:32:39.570722   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:32:39.570729   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:32:39.570736   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:32:39.570746   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:32:39.570754   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:32:39.570761   32908 command_runner.go:130] > AppArmorEnabled:  false
	I0213 22:32:39.570836   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:32:39.626070   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:32:39.626093   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:32:39.626100   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:32:39.626104   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:32:39.626110   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:32:39.626115   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:32:39.626119   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:32:39.626123   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:32:39.626128   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:32:39.626136   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:32:39.626143   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:32:39.626149   32908 command_runner.go:130] > AppArmorEnabled:  false
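
	Both version probes above return simple "Key:  value" text that minikube folds into its "Preparing Kubernetes ... on CRI-O ..." banner. A small sketch of parsing that output into a map is below; the sample input is copied from the crictl version lines earlier in the log.

	// Sketch of parsing the key/value runtime-version output shown above.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		out := `Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1`
		info := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				info[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		fmt.Printf("%s %s (CRI API %s)\n", info["RuntimeName"], info["RuntimeVersion"], info["RuntimeApiVersion"])
	}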
	I0213 22:32:39.629692   32908 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 22:32:39.631420   32908 out.go:177]   - env NO_PROXY=192.168.39.81
	I0213 22:32:39.632936   32908 main.go:141] libmachine: (multinode-413653-m02) Calling .GetIP
	I0213 22:32:39.635587   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:39.635902   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:32:39.635926   32908 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:32:39.636171   32908 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 22:32:39.641009   32908 command_runner.go:130] > 192.168.39.1	host.minikube.internal
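Here minikube greps the node's /etc/hosts for the host.minikube.internal entry before moving on to certificate setup. A small Go sketch of an equivalent local check (file path and hostname are taken from the log; running it locally instead of over SSH is an assumption for illustration):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Assumption: run on the node itself; minikube performs this via `grep` over SSH.
	f, err := os.Open("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 2 {
			continue
		}
		// A hosts line is "IP name [aliases...]"; look for host.minikube.internal.
		for _, name := range fields[1:] {
			if name == "host.minikube.internal" {
				fmt.Printf("found: %s -> %s\n", name, fields[0])
				return
			}
		}
	}
	fmt.Println("host.minikube.internal not present")
}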
	I0213 22:32:39.641219   32908 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653 for IP: 192.168.39.94
	I0213 22:32:39.641248   32908 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:32:39.641420   32908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 22:32:39.641485   32908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 22:32:39.641503   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 22:32:39.641525   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 22:32:39.641543   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 22:32:39.641562   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 22:32:39.641629   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 22:32:39.641669   32908 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 22:32:39.641682   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 22:32:39.641727   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 22:32:39.641763   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 22:32:39.641797   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 22:32:39.641853   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:32:39.641911   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem -> /usr/share/ca-certificates/16200.pem
	I0213 22:32:39.641934   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /usr/share/ca-certificates/162002.pem
	I0213 22:32:39.641953   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:32:39.642298   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 22:32:39.667518   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 22:32:39.691354   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 22:32:39.716473   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 22:32:39.739853   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 22:32:39.764529   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 22:32:39.787217   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 22:32:39.809400   32908 ssh_runner.go:195] Run: openssl version
	I0213 22:32:39.816301   32908 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0213 22:32:39.816389   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 22:32:39.828088   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:32:39.832826   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:32:39.833098   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:32:39.833170   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:32:39.838708   32908 command_runner.go:130] > b5213941
	I0213 22:32:39.838950   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 22:32:39.848622   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 22:32:39.859641   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 22:32:39.864448   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:32:39.864476   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:32:39.864516   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 22:32:39.870204   32908 command_runner.go:130] > 51391683
	I0213 22:32:39.870294   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 22:32:39.880090   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 22:32:39.891596   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 22:32:39.896722   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:32:39.896751   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:32:39.896796   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 22:32:39.902510   32908 command_runner.go:130] > 3ec20f2e
	I0213 22:32:39.902591   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
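The sequence above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A hedged Go sketch of that hash-and-symlink step, shelling out to openssl the same way the log does (the path is illustrative, root privileges are assumed, and error handling is minimal):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkCA computes the OpenSSL subject hash of a PEM certificate and creates the
// /etc/ssl/certs/<hash>.0 symlink that TLS tooling expects, mirroring the log above.
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// os.Symlink fails if the link already exists; remove it first, like `ln -fs` does.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Assumption: run as root on the node; this path appears in the log above.
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}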
	I0213 22:32:39.912512   32908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 22:32:39.917528   32908 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 22:32:39.917582   32908 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 22:32:39.917687   32908 ssh_runner.go:195] Run: crio config
	I0213 22:32:39.969427   32908 command_runner.go:130] ! time="2024-02-13 22:32:39.961393432Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0213 22:32:39.969457   32908 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0213 22:32:39.978334   32908 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0213 22:32:39.978357   32908 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0213 22:32:39.978364   32908 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0213 22:32:39.978367   32908 command_runner.go:130] > #
	I0213 22:32:39.978374   32908 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0213 22:32:39.978380   32908 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0213 22:32:39.978388   32908 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0213 22:32:39.978398   32908 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0213 22:32:39.978404   32908 command_runner.go:130] > # reload'.
	I0213 22:32:39.978413   32908 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0213 22:32:39.978432   32908 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0213 22:32:39.978443   32908 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0213 22:32:39.978452   32908 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0213 22:32:39.978458   32908 command_runner.go:130] > [crio]
	I0213 22:32:39.978471   32908 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0213 22:32:39.978479   32908 command_runner.go:130] > # containers images, in this directory.
	I0213 22:32:39.978484   32908 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0213 22:32:39.978494   32908 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0213 22:32:39.978499   32908 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0213 22:32:39.978509   32908 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0213 22:32:39.978522   32908 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0213 22:32:39.978533   32908 command_runner.go:130] > storage_driver = "overlay"
	I0213 22:32:39.978542   32908 command_runner.go:130] > # List of options to pass to the storage driver. Please refer to
	I0213 22:32:39.978556   32908 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0213 22:32:39.978566   32908 command_runner.go:130] > storage_option = [
	I0213 22:32:39.978577   32908 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0213 22:32:39.978583   32908 command_runner.go:130] > ]
	I0213 22:32:39.978594   32908 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0213 22:32:39.978603   32908 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0213 22:32:39.978610   32908 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0213 22:32:39.978619   32908 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0213 22:32:39.978632   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0213 22:32:39.978645   32908 command_runner.go:130] > # always happen on a node reboot
	I0213 22:32:39.978656   32908 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0213 22:32:39.978669   32908 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0213 22:32:39.978678   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0213 22:32:39.978688   32908 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0213 22:32:39.978695   32908 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0213 22:32:39.978703   32908 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0213 22:32:39.978716   32908 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0213 22:32:39.978726   32908 command_runner.go:130] > # internal_wipe = true
	I0213 22:32:39.978739   32908 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0213 22:32:39.978762   32908 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0213 22:32:39.978778   32908 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0213 22:32:39.978788   32908 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0213 22:32:39.978796   32908 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0213 22:32:39.978804   32908 command_runner.go:130] > [crio.api]
	I0213 22:32:39.978814   32908 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0213 22:32:39.978825   32908 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0213 22:32:39.978835   32908 command_runner.go:130] > # IP address on which the stream server will listen.
	I0213 22:32:39.978846   32908 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0213 22:32:39.978860   32908 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0213 22:32:39.978871   32908 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0213 22:32:39.978880   32908 command_runner.go:130] > # stream_port = "0"
	I0213 22:32:39.978888   32908 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0213 22:32:39.978897   32908 command_runner.go:130] > # stream_enable_tls = false
	I0213 22:32:39.978910   32908 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0213 22:32:39.978921   32908 command_runner.go:130] > # stream_idle_timeout = ""
	I0213 22:32:39.978935   32908 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0213 22:32:39.978946   32908 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0213 22:32:39.978955   32908 command_runner.go:130] > # minutes.
	I0213 22:32:39.978963   32908 command_runner.go:130] > # stream_tls_cert = ""
	I0213 22:32:39.978976   32908 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0213 22:32:39.978987   32908 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0213 22:32:39.978997   32908 command_runner.go:130] > # stream_tls_key = ""
	I0213 22:32:39.979011   32908 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0213 22:32:39.979026   32908 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0213 22:32:39.979038   32908 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0213 22:32:39.979046   32908 command_runner.go:130] > # stream_tls_ca = ""
	I0213 22:32:39.979060   32908 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:32:39.979070   32908 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0213 22:32:39.979085   32908 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:32:39.979095   32908 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0213 22:32:39.979119   32908 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0213 22:32:39.979132   32908 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0213 22:32:39.979141   32908 command_runner.go:130] > [crio.runtime]
	I0213 22:32:39.979152   32908 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0213 22:32:39.979161   32908 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0213 22:32:39.979166   32908 command_runner.go:130] > # "nofile=1024:2048"
	I0213 22:32:39.979179   32908 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0213 22:32:39.979187   32908 command_runner.go:130] > # default_ulimits = [
	I0213 22:32:39.979196   32908 command_runner.go:130] > # ]
	I0213 22:32:39.979207   32908 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0213 22:32:39.979216   32908 command_runner.go:130] > # no_pivot = false
	I0213 22:32:39.979227   32908 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0213 22:32:39.979240   32908 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0213 22:32:39.979250   32908 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0213 22:32:39.979257   32908 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0213 22:32:39.979268   32908 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0213 22:32:39.979280   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:32:39.979291   32908 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0213 22:32:39.979300   32908 command_runner.go:130] > # Cgroup setting for conmon
	I0213 22:32:39.979315   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0213 22:32:39.979325   32908 command_runner.go:130] > conmon_cgroup = "pod"
	I0213 22:32:39.979338   32908 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0213 22:32:39.979346   32908 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0213 22:32:39.979358   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:32:39.979369   32908 command_runner.go:130] > conmon_env = [
	I0213 22:32:39.979383   32908 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0213 22:32:39.979391   32908 command_runner.go:130] > ]
	I0213 22:32:39.979401   32908 command_runner.go:130] > # Additional environment variables to set for all the
	I0213 22:32:39.979412   32908 command_runner.go:130] > # containers. These are overridden if set in the
	I0213 22:32:39.979425   32908 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0213 22:32:39.979432   32908 command_runner.go:130] > # default_env = [
	I0213 22:32:39.979437   32908 command_runner.go:130] > # ]
	I0213 22:32:39.979447   32908 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0213 22:32:39.979455   32908 command_runner.go:130] > # selinux = false
	I0213 22:32:39.979469   32908 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0213 22:32:39.979484   32908 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0213 22:32:39.979496   32908 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0213 22:32:39.979506   32908 command_runner.go:130] > # seccomp_profile = ""
	I0213 22:32:39.979517   32908 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0213 22:32:39.979527   32908 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0213 22:32:39.979540   32908 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0213 22:32:39.979552   32908 command_runner.go:130] > # which might increase security.
	I0213 22:32:39.979561   32908 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0213 22:32:39.979574   32908 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0213 22:32:39.979591   32908 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0213 22:32:39.979604   32908 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0213 22:32:39.979614   32908 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0213 22:32:39.979622   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:32:39.979634   32908 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0213 22:32:39.979655   32908 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0213 22:32:39.979676   32908 command_runner.go:130] > # the cgroup blockio controller.
	I0213 22:32:39.979683   32908 command_runner.go:130] > # blockio_config_file = ""
	I0213 22:32:39.979693   32908 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0213 22:32:39.979699   32908 command_runner.go:130] > # irqbalance daemon.
	I0213 22:32:39.979707   32908 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0213 22:32:39.979717   32908 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0213 22:32:39.979726   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:32:39.979734   32908 command_runner.go:130] > # rdt_config_file = ""
	I0213 22:32:39.979746   32908 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0213 22:32:39.979755   32908 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0213 22:32:39.979766   32908 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0213 22:32:39.979784   32908 command_runner.go:130] > # separate_pull_cgroup = ""
	I0213 22:32:39.979793   32908 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0213 22:32:39.979802   32908 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0213 22:32:39.979811   32908 command_runner.go:130] > # will be added.
	I0213 22:32:39.979819   32908 command_runner.go:130] > # default_capabilities = [
	I0213 22:32:39.979829   32908 command_runner.go:130] > # 	"CHOWN",
	I0213 22:32:39.979835   32908 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0213 22:32:39.979841   32908 command_runner.go:130] > # 	"FSETID",
	I0213 22:32:39.979847   32908 command_runner.go:130] > # 	"FOWNER",
	I0213 22:32:39.979852   32908 command_runner.go:130] > # 	"SETGID",
	I0213 22:32:39.979859   32908 command_runner.go:130] > # 	"SETUID",
	I0213 22:32:39.979868   32908 command_runner.go:130] > # 	"SETPCAP",
	I0213 22:32:39.979875   32908 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0213 22:32:39.979885   32908 command_runner.go:130] > # 	"KILL",
	I0213 22:32:39.979893   32908 command_runner.go:130] > # ]
	I0213 22:32:39.979904   32908 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0213 22:32:39.979916   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:32:39.979926   32908 command_runner.go:130] > # default_sysctls = [
	I0213 22:32:39.979931   32908 command_runner.go:130] > # ]
	I0213 22:32:39.979942   32908 command_runner.go:130] > # List of devices on the host that a
	I0213 22:32:39.979955   32908 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0213 22:32:39.979964   32908 command_runner.go:130] > # allowed_devices = [
	I0213 22:32:39.979971   32908 command_runner.go:130] > # 	"/dev/fuse",
	I0213 22:32:39.979977   32908 command_runner.go:130] > # ]
	I0213 22:32:39.979989   32908 command_runner.go:130] > # List of additional devices, specified as
	I0213 22:32:39.980004   32908 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0213 22:32:39.980018   32908 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0213 22:32:39.980042   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:32:39.980061   32908 command_runner.go:130] > # additional_devices = [
	I0213 22:32:39.980067   32908 command_runner.go:130] > # ]
	I0213 22:32:39.980074   32908 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0213 22:32:39.980081   32908 command_runner.go:130] > # cdi_spec_dirs = [
	I0213 22:32:39.980087   32908 command_runner.go:130] > # 	"/etc/cdi",
	I0213 22:32:39.980097   32908 command_runner.go:130] > # 	"/var/run/cdi",
	I0213 22:32:39.980103   32908 command_runner.go:130] > # ]
	I0213 22:32:39.980118   32908 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0213 22:32:39.980131   32908 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0213 22:32:39.980141   32908 command_runner.go:130] > # Defaults to false.
	I0213 22:32:39.980152   32908 command_runner.go:130] > # device_ownership_from_security_context = false
	I0213 22:32:39.980165   32908 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0213 22:32:39.980176   32908 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0213 22:32:39.980182   32908 command_runner.go:130] > # hooks_dir = [
	I0213 22:32:39.980189   32908 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0213 22:32:39.980193   32908 command_runner.go:130] > # ]
	I0213 22:32:39.980199   32908 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0213 22:32:39.980208   32908 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0213 22:32:39.980216   32908 command_runner.go:130] > # its default mounts from the following two files:
	I0213 22:32:39.980219   32908 command_runner.go:130] > #
	I0213 22:32:39.980226   32908 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0213 22:32:39.980234   32908 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0213 22:32:39.980240   32908 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0213 22:32:39.980245   32908 command_runner.go:130] > #
	I0213 22:32:39.980251   32908 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0213 22:32:39.980260   32908 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0213 22:32:39.980266   32908 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0213 22:32:39.980273   32908 command_runner.go:130] > #      only add mounts it finds in this file.
	I0213 22:32:39.980277   32908 command_runner.go:130] > #
	I0213 22:32:39.980281   32908 command_runner.go:130] > # default_mounts_file = ""
	I0213 22:32:39.980289   32908 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0213 22:32:39.980295   32908 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0213 22:32:39.980302   32908 command_runner.go:130] > pids_limit = 1024
	I0213 22:32:39.980308   32908 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0213 22:32:39.980314   32908 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0213 22:32:39.980322   32908 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0213 22:32:39.980331   32908 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0213 22:32:39.980337   32908 command_runner.go:130] > # log_size_max = -1
	I0213 22:32:39.980344   32908 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0213 22:32:39.980350   32908 command_runner.go:130] > # log_to_journald = false
	I0213 22:32:39.980356   32908 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0213 22:32:39.980363   32908 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0213 22:32:39.980369   32908 command_runner.go:130] > # Path to directory for container attach sockets.
	I0213 22:32:39.980376   32908 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0213 22:32:39.980384   32908 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0213 22:32:39.980391   32908 command_runner.go:130] > # bind_mount_prefix = ""
	I0213 22:32:39.980398   32908 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0213 22:32:39.980404   32908 command_runner.go:130] > # read_only = false
	I0213 22:32:39.980410   32908 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0213 22:32:39.980418   32908 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0213 22:32:39.980423   32908 command_runner.go:130] > # live configuration reload.
	I0213 22:32:39.980429   32908 command_runner.go:130] > # log_level = "info"
	I0213 22:32:39.980435   32908 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0213 22:32:39.980442   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:32:39.980446   32908 command_runner.go:130] > # log_filter = ""
	I0213 22:32:39.980454   32908 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0213 22:32:39.980461   32908 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0213 22:32:39.980467   32908 command_runner.go:130] > # separated by comma.
	I0213 22:32:39.980472   32908 command_runner.go:130] > # uid_mappings = ""
	I0213 22:32:39.980479   32908 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0213 22:32:39.980486   32908 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0213 22:32:39.980493   32908 command_runner.go:130] > # separated by comma.
	I0213 22:32:39.980497   32908 command_runner.go:130] > # gid_mappings = ""
	I0213 22:32:39.980505   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0213 22:32:39.980513   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:32:39.980521   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:32:39.980527   32908 command_runner.go:130] > # minimum_mappable_uid = -1
	I0213 22:32:39.980533   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0213 22:32:39.980541   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:32:39.980549   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:32:39.980555   32908 command_runner.go:130] > # minimum_mappable_gid = -1
	I0213 22:32:39.980561   32908 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0213 22:32:39.980569   32908 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0213 22:32:39.980577   32908 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0213 22:32:39.980581   32908 command_runner.go:130] > # ctr_stop_timeout = 30
	I0213 22:32:39.980593   32908 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0213 22:32:39.980601   32908 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0213 22:32:39.980608   32908 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0213 22:32:39.980613   32908 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0213 22:32:39.980621   32908 command_runner.go:130] > drop_infra_ctr = false
	I0213 22:32:39.980629   32908 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0213 22:32:39.980637   32908 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0213 22:32:39.980646   32908 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0213 22:32:39.980652   32908 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0213 22:32:39.980658   32908 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0213 22:32:39.980665   32908 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0213 22:32:39.980669   32908 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0213 22:32:39.980678   32908 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0213 22:32:39.980685   32908 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0213 22:32:39.980691   32908 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0213 22:32:39.980699   32908 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0213 22:32:39.980707   32908 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0213 22:32:39.980714   32908 command_runner.go:130] > # default_runtime = "runc"
	I0213 22:32:39.980719   32908 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0213 22:32:39.980729   32908 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0213 22:32:39.980740   32908 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0213 22:32:39.980747   32908 command_runner.go:130] > # creation as a file is not desired either.
	I0213 22:32:39.980757   32908 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0213 22:32:39.980763   32908 command_runner.go:130] > # the hostname is being managed dynamically.
	I0213 22:32:39.980769   32908 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0213 22:32:39.980773   32908 command_runner.go:130] > # ]
	I0213 22:32:39.980781   32908 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0213 22:32:39.980788   32908 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0213 22:32:39.980796   32908 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0213 22:32:39.980805   32908 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0213 22:32:39.980810   32908 command_runner.go:130] > #
	I0213 22:32:39.980815   32908 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0213 22:32:39.980822   32908 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0213 22:32:39.980826   32908 command_runner.go:130] > #  runtime_type = "oci"
	I0213 22:32:39.980833   32908 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0213 22:32:39.980838   32908 command_runner.go:130] > #  privileged_without_host_devices = false
	I0213 22:32:39.980845   32908 command_runner.go:130] > #  allowed_annotations = []
	I0213 22:32:39.980848   32908 command_runner.go:130] > # Where:
	I0213 22:32:39.980855   32908 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0213 22:32:39.980863   32908 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0213 22:32:39.980871   32908 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0213 22:32:39.980880   32908 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0213 22:32:39.980886   32908 command_runner.go:130] > #   in $PATH.
	I0213 22:32:39.980895   32908 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0213 22:32:39.980906   32908 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0213 22:32:39.980919   32908 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0213 22:32:39.980929   32908 command_runner.go:130] > #   state.
	I0213 22:32:39.980939   32908 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0213 22:32:39.980952   32908 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0213 22:32:39.980966   32908 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0213 22:32:39.980977   32908 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0213 22:32:39.980986   32908 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0213 22:32:39.980995   32908 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0213 22:32:39.981002   32908 command_runner.go:130] > #   The currently recognized values are:
	I0213 22:32:39.981008   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0213 22:32:39.981017   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0213 22:32:39.981027   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0213 22:32:39.981041   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0213 22:32:39.981057   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0213 22:32:39.981072   32908 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0213 22:32:39.981084   32908 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0213 22:32:39.981095   32908 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0213 22:32:39.981101   32908 command_runner.go:130] > #   should be moved to the container's cgroup
	I0213 22:32:39.981106   32908 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0213 22:32:39.981114   32908 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0213 22:32:39.981118   32908 command_runner.go:130] > runtime_type = "oci"
	I0213 22:32:39.981123   32908 command_runner.go:130] > runtime_root = "/run/runc"
	I0213 22:32:39.981128   32908 command_runner.go:130] > runtime_config_path = ""
	I0213 22:32:39.981138   32908 command_runner.go:130] > monitor_path = ""
	I0213 22:32:39.981148   32908 command_runner.go:130] > monitor_cgroup = ""
	I0213 22:32:39.981156   32908 command_runner.go:130] > monitor_exec_cgroup = ""
	I0213 22:32:39.981170   32908 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0213 22:32:39.981180   32908 command_runner.go:130] > # running containers
	I0213 22:32:39.981188   32908 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0213 22:32:39.981201   32908 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0213 22:32:39.981231   32908 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0213 22:32:39.981245   32908 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0213 22:32:39.981257   32908 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0213 22:32:39.981266   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0213 22:32:39.981277   32908 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0213 22:32:39.981288   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0213 22:32:39.981297   32908 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0213 22:32:39.981307   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0213 22:32:39.981322   32908 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0213 22:32:39.981331   32908 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0213 22:32:39.981341   32908 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0213 22:32:39.981358   32908 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0213 22:32:39.981374   32908 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0213 22:32:39.981387   32908 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0213 22:32:39.981405   32908 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0213 22:32:39.981418   32908 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0213 22:32:39.981430   32908 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0213 22:32:39.981446   32908 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0213 22:32:39.981455   32908 command_runner.go:130] > # Example:
	I0213 22:32:39.981464   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0213 22:32:39.981475   32908 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0213 22:32:39.981486   32908 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0213 22:32:39.981498   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0213 22:32:39.981506   32908 command_runner.go:130] > # cpuset = 0
	I0213 22:32:39.981514   32908 command_runner.go:130] > # cpushares = "0-1"
	I0213 22:32:39.981523   32908 command_runner.go:130] > # Where:
	I0213 22:32:39.981534   32908 command_runner.go:130] > # The workload name is workload-type.
	I0213 22:32:39.981559   32908 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0213 22:32:39.981573   32908 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0213 22:32:39.981586   32908 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0213 22:32:39.981605   32908 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0213 22:32:39.981617   32908 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0213 22:32:39.981625   32908 command_runner.go:130] > # 
	I0213 22:32:39.981640   32908 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0213 22:32:39.981649   32908 command_runner.go:130] > #
	I0213 22:32:39.981662   32908 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0213 22:32:39.981676   32908 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0213 22:32:39.981690   32908 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0213 22:32:39.981703   32908 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0213 22:32:39.981713   32908 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0213 22:32:39.981722   32908 command_runner.go:130] > [crio.image]
	I0213 22:32:39.981735   32908 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0213 22:32:39.981746   32908 command_runner.go:130] > # default_transport = "docker://"
	I0213 22:32:39.981759   32908 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0213 22:32:39.981773   32908 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:32:39.981783   32908 command_runner.go:130] > # global_auth_file = ""
	I0213 22:32:39.981794   32908 command_runner.go:130] > # The image used to instantiate infra containers.
	I0213 22:32:39.981802   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:32:39.981814   32908 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0213 22:32:39.981828   32908 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0213 22:32:39.981841   32908 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:32:39.981852   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:32:39.981863   32908 command_runner.go:130] > # pause_image_auth_file = ""
	I0213 22:32:39.981892   32908 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0213 22:32:39.981905   32908 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0213 22:32:39.981918   32908 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0213 22:32:39.981931   32908 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0213 22:32:39.981941   32908 command_runner.go:130] > # pause_command = "/pause"
	I0213 22:32:39.981954   32908 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0213 22:32:39.981964   32908 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0213 22:32:39.981978   32908 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0213 22:32:39.981991   32908 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0213 22:32:39.982004   32908 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0213 22:32:39.982014   32908 command_runner.go:130] > # signature_policy = ""
	I0213 22:32:39.982027   32908 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0213 22:32:39.982040   32908 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0213 22:32:39.982050   32908 command_runner.go:130] > # changing them here.
	I0213 22:32:39.982058   32908 command_runner.go:130] > # insecure_registries = [
	I0213 22:32:39.982063   32908 command_runner.go:130] > # ]
	I0213 22:32:39.982080   32908 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0213 22:32:39.982092   32908 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0213 22:32:39.982103   32908 command_runner.go:130] > # image_volumes = "mkdir"
	I0213 22:32:39.982114   32908 command_runner.go:130] > # Temporary directory to use for storing big files
	I0213 22:32:39.982126   32908 command_runner.go:130] > # big_files_temporary_dir = ""
	I0213 22:32:39.982139   32908 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0213 22:32:39.982149   32908 command_runner.go:130] > # CNI plugins.
	I0213 22:32:39.982158   32908 command_runner.go:130] > [crio.network]
	I0213 22:32:39.982168   32908 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0213 22:32:39.982181   32908 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0213 22:32:39.982191   32908 command_runner.go:130] > # cni_default_network = ""
	I0213 22:32:39.982201   32908 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0213 22:32:39.982211   32908 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0213 22:32:39.982224   32908 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0213 22:32:39.982234   32908 command_runner.go:130] > # plugin_dirs = [
	I0213 22:32:39.982244   32908 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0213 22:32:39.982253   32908 command_runner.go:130] > # ]
	I0213 22:32:39.982265   32908 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0213 22:32:39.982273   32908 command_runner.go:130] > [crio.metrics]
	I0213 22:32:39.982279   32908 command_runner.go:130] > # Globally enable or disable metrics support.
	I0213 22:32:39.982289   32908 command_runner.go:130] > enable_metrics = true
	I0213 22:32:39.982300   32908 command_runner.go:130] > # Specify enabled metrics collectors.
	I0213 22:32:39.982312   32908 command_runner.go:130] > # Per default all metrics are enabled.
	I0213 22:32:39.982326   32908 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0213 22:32:39.982339   32908 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0213 22:32:39.982353   32908 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0213 22:32:39.982362   32908 command_runner.go:130] > # metrics_collectors = [
	I0213 22:32:39.982369   32908 command_runner.go:130] > # 	"operations",
	I0213 22:32:39.982377   32908 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0213 22:32:39.982389   32908 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0213 22:32:39.982400   32908 command_runner.go:130] > # 	"operations_errors",
	I0213 22:32:39.982411   32908 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0213 22:32:39.982422   32908 command_runner.go:130] > # 	"image_pulls_by_name",
	I0213 22:32:39.982433   32908 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0213 22:32:39.982444   32908 command_runner.go:130] > # 	"image_pulls_failures",
	I0213 22:32:39.982454   32908 command_runner.go:130] > # 	"image_pulls_successes",
	I0213 22:32:39.982464   32908 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0213 22:32:39.982471   32908 command_runner.go:130] > # 	"image_layer_reuse",
	I0213 22:32:39.982480   32908 command_runner.go:130] > # 	"containers_oom_total",
	I0213 22:32:39.982490   32908 command_runner.go:130] > # 	"containers_oom",
	I0213 22:32:39.982502   32908 command_runner.go:130] > # 	"processes_defunct",
	I0213 22:32:39.982513   32908 command_runner.go:130] > # 	"operations_total",
	I0213 22:32:39.982524   32908 command_runner.go:130] > # 	"operations_latency_seconds",
	I0213 22:32:39.982535   32908 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0213 22:32:39.982545   32908 command_runner.go:130] > # 	"operations_errors_total",
	I0213 22:32:39.982556   32908 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0213 22:32:39.982567   32908 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0213 22:32:39.982575   32908 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0213 22:32:39.982585   32908 command_runner.go:130] > # 	"image_pulls_success_total",
	I0213 22:32:39.982600   32908 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0213 22:32:39.982611   32908 command_runner.go:130] > # 	"containers_oom_count_total",
	I0213 22:32:39.982620   32908 command_runner.go:130] > # ]
	I0213 22:32:39.982631   32908 command_runner.go:130] > # The port on which the metrics server will listen.
	I0213 22:32:39.982641   32908 command_runner.go:130] > # metrics_port = 9090
	I0213 22:32:39.982653   32908 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0213 22:32:39.982662   32908 command_runner.go:130] > # metrics_socket = ""
	I0213 22:32:39.982671   32908 command_runner.go:130] > # The certificate for the secure metrics server.
	I0213 22:32:39.982684   32908 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0213 22:32:39.982698   32908 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0213 22:32:39.982710   32908 command_runner.go:130] > # certificate on any modification event.
	I0213 22:32:39.982720   32908 command_runner.go:130] > # metrics_cert = ""
	I0213 22:32:39.982733   32908 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0213 22:32:39.982744   32908 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0213 22:32:39.982754   32908 command_runner.go:130] > # metrics_key = ""
	I0213 22:32:39.982765   32908 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0213 22:32:39.982773   32908 command_runner.go:130] > [crio.tracing]
	I0213 22:32:39.982786   32908 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0213 22:32:39.982796   32908 command_runner.go:130] > # enable_tracing = false
	I0213 22:32:39.982806   32908 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0213 22:32:39.982817   32908 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0213 22:32:39.982829   32908 command_runner.go:130] > # Number of samples to collect per million spans.
	I0213 22:32:39.982840   32908 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0213 22:32:39.982854   32908 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0213 22:32:39.982863   32908 command_runner.go:130] > [crio.stats]
	I0213 22:32:39.982872   32908 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0213 22:32:39.982884   32908 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0213 22:32:39.982898   32908 command_runner.go:130] > # stats_collection_period = 0
	I0213 22:32:39.982972   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:32:39.982984   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:32:39.982994   32908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 22:32:39.983020   32908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-413653 NodeName:multinode-413653-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 22:32:39.983162   32908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-413653-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 22:32:39.983218   32908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-413653-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
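	[Editorial aside, not part of the captured log: the kubeadm/kubelet/kube-proxy YAML rendered a few lines above can be sanity-checked offline before a join is attempted. A minimal sketch, assuming the documents were saved to a hypothetical /tmp/kubeadm-m02.yaml and that the bundled kubeadm binary provides the "config validate" subcommand (present in recent releases):]

	# Illustrative only -- these commands are not part of the test run logged here.
	# /tmp/kubeadm-m02.yaml is a hypothetical path holding the YAML shown above.
	KUBEADM=/var/lib/minikube/binaries/v1.28.4/kubeadm
	sudo "$KUBEADM" config validate --config /tmp/kubeadm-m02.yaml   # static validation of the generated documents
	sudo "$KUBEADM" config print join-defaults                       # compare the rendered values against upstream defaults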
	I0213 22:32:39.983277   32908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 22:32:39.993744   32908 command_runner.go:130] > kubeadm
	I0213 22:32:39.993765   32908 command_runner.go:130] > kubectl
	I0213 22:32:39.993770   32908 command_runner.go:130] > kubelet
	I0213 22:32:39.993801   32908 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 22:32:39.993890   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0213 22:32:40.003490   32908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0213 22:32:40.022110   32908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 22:32:40.039598   32908 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0213 22:32:40.043393   32908 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
	I0213 22:32:40.043458   32908 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:32:40.043728   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:32:40.043891   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:32:40.043923   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:32:40.058277   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34597
	I0213 22:32:40.058695   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:32:40.059153   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:32:40.059175   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:32:40.059477   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:32:40.059656   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:32:40.059837   32908 start.go:304] JoinCluster: &{Name:multinode-413653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:32:40.059992   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0213 22:32:40.060011   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:32:40.062982   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:32:40.063450   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:32:40.063478   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:32:40.063630   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:32:40.063777   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:32:40.063962   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:32:40.064101   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:32:40.254804   32908 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 78i0c1.a4c0s2fhiipv91r5 --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 22:32:40.254987   32908 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0213 22:32:40.255028   32908 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:32:40.255345   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:32:40.255383   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:32:40.271007   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
	I0213 22:32:40.271473   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:32:40.271946   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:32:40.271976   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:32:40.272353   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:32:40.272576   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:32:40.272794   32908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-413653-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0213 22:32:40.272825   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:32:40.275746   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:32:40.276236   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:32:40.276261   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:32:40.276428   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:32:40.276607   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:32:40.276785   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:32:40.276944   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:32:40.481693   32908 command_runner.go:130] > node/multinode-413653-m02 cordoned
	I0213 22:32:43.525473   32908 command_runner.go:130] > pod "busybox-5b5d89c9d6-w6ghx" has DeletionTimestamp older than 1 seconds, skipping
	I0213 22:32:43.525519   32908 command_runner.go:130] > node/multinode-413653-m02 drained
	I0213 22:32:43.527266   32908 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0213 22:32:43.527290   32908 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-4m5lx, kube-system/kube-proxy-26ww9
	I0213 22:32:43.527323   32908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-413653-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.25450185s)
	I0213 22:32:43.527350   32908 node.go:108] successfully drained node "m02"
	I0213 22:32:43.527699   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:32:43.527943   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:32:43.528311   32908 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0213 22:32:43.528365   32908 round_trippers.go:463] DELETE https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:43.528376   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:43.528388   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:43.528398   32908 round_trippers.go:473]     Content-Type: application/json
	I0213 22:32:43.528407   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:43.543438   32908 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0213 22:32:43.543460   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:43.543467   32908 round_trippers.go:580]     Audit-Id: 95b109b9-3b40-4e80-8c3b-96feaefdd3ce
	I0213 22:32:43.543472   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:43.543477   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:43.543482   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:43.543487   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:43.543492   32908 round_trippers.go:580]     Content-Length: 171
	I0213 22:32:43.543497   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:43 GMT
	I0213 22:32:43.543514   32908 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-413653-m02","kind":"nodes","uid":"e15d93ce-6cc1-4cb6-8e3a-d3d69862c7a4"}}
	I0213 22:32:43.543534   32908 node.go:124] successfully deleted node "m02"
	I0213 22:32:43.543543   32908 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0213 22:32:43.543561   32908 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0213 22:32:43.543595   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78i0c1.a4c0s2fhiipv91r5 --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-413653-m02"
	I0213 22:32:43.607128   32908 command_runner.go:130] ! W0213 22:32:43.599030    2663 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0213 22:32:43.607228   32908 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0213 22:32:43.773661   32908 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0213 22:32:43.773695   32908 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0213 22:32:44.548424   32908 command_runner.go:130] > [preflight] Running pre-flight checks
	I0213 22:32:44.548504   32908 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0213 22:32:44.548522   32908 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0213 22:32:44.548535   32908 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 22:32:44.548547   32908 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 22:32:44.548556   32908 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0213 22:32:44.548566   32908 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0213 22:32:44.548572   32908 command_runner.go:130] > This node has joined the cluster:
	I0213 22:32:44.548579   32908 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0213 22:32:44.548584   32908 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0213 22:32:44.548590   32908 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0213 22:32:44.548606   32908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78i0c1.a4c0s2fhiipv91r5 --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-413653-m02": (1.004997122s)
	I0213 22:32:44.548628   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0213 22:32:44.858100   32908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=multinode-413653 minikube.k8s.io/updated_at=2024_02_13T22_32_44_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:32:44.972704   32908 command_runner.go:130] > node/multinode-413653-m02 labeled
	I0213 22:32:44.987187   32908 command_runner.go:130] > node/multinode-413653-m03 labeled
	I0213 22:32:44.989930   32908 start.go:306] JoinCluster complete in 4.930087932s
	I0213 22:32:44.989975   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:32:44.989984   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:32:44.990043   32908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0213 22:32:44.995712   32908 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0213 22:32:44.995741   32908 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0213 22:32:44.995753   32908 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0213 22:32:44.995760   32908 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:32:44.995776   32908 command_runner.go:130] > Access: 2024-02-13 22:30:12.772425470 +0000
	I0213 22:32:44.995782   32908 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0213 22:32:44.995786   32908 command_runner.go:130] > Change: 2024-02-13 22:30:10.912425470 +0000
	I0213 22:32:44.995792   32908 command_runner.go:130] >  Birth: -
	I0213 22:32:44.995911   32908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0213 22:32:44.995933   32908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0213 22:32:45.019421   32908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0213 22:32:45.389803   32908 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:32:45.389834   32908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:32:45.389843   32908 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0213 22:32:45.389850   32908 command_runner.go:130] > daemonset.apps/kindnet configured
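	[Editorial aside, not part of the captured log: once the kindnet manifest has been applied as above, the rollout across the three nodes could be confirmed with standard kubectl commands; a minimal sketch, assuming the same kubeconfig the test uses on the VM:]

	# Illustrative only -- not part of the logged test run.
	export KUBECONFIG=/var/lib/minikube/kubeconfig
	kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s   # wait for one kindnet pod per node
	kubectl get nodes -o wide                                                # all three multinode-413653* nodes should report Ready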
	I0213 22:32:45.390261   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:32:45.390474   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:32:45.390772   32908 round_trippers.go:463] GET https://192.168.39.81:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0213 22:32:45.390787   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.390794   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.390800   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.393648   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.393672   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.393680   32908 round_trippers.go:580]     Audit-Id: e00902d7-9456-46a5-a2f0-bdee5bb6ce90
	I0213 22:32:45.393685   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.393690   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.393695   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.393700   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.393708   32908 round_trippers.go:580]     Content-Length: 291
	I0213 22:32:45.393713   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.393734   32908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eccb91db-2bff-44e5-a49d-713d6c3d3d2b","resourceVersion":"856","creationTimestamp":"2024-02-13T22:20:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0213 22:32:45.393813   32908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-413653" context rescaled to 1 replicas
	I0213 22:32:45.393841   32908 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0213 22:32:45.395989   32908 out.go:177] * Verifying Kubernetes components...
	I0213 22:32:45.397481   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:32:45.412806   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:32:45.413148   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:32:45.413471   32908 node_ready.go:35] waiting up to 6m0s for node "multinode-413653-m02" to be "Ready" ...
	I0213 22:32:45.413567   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:45.413579   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.413595   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.413609   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.417212   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:32:45.417235   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.417247   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.417253   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.417262   32908 round_trippers.go:580]     Audit-Id: 7972cac5-92fe-4a0c-b01b-207539b788fe
	I0213 22:32:45.417267   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.417273   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.417278   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.417591   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"1e9c4839-34c8-4278-ae96-8c649be816a3","resourceVersion":"1005","creationTimestamp":"2024-02-13T22:32:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_32_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:32:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0213 22:32:45.417961   32908 node_ready.go:49] node "multinode-413653-m02" has status "Ready":"True"
	I0213 22:32:45.417984   32908 node_ready.go:38] duration metric: took 4.492687ms waiting for node "multinode-413653-m02" to be "Ready" ...
	I0213 22:32:45.417996   32908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:32:45.418068   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:32:45.418084   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.418095   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.418117   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.423046   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:32:45.423073   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.423084   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.423091   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.423098   32908 round_trippers.go:580]     Audit-Id: 5ea46851-e3a6-4c3e-9a3e-66e7a5b5114f
	I0213 22:32:45.423105   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.423114   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.423121   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.424253   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1012"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82198 chars]
	I0213 22:32:45.426939   32908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.427054   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:32:45.427066   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.427077   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.427086   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.432200   32908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0213 22:32:45.432227   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.432235   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.432244   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.432252   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.432261   32908 round_trippers.go:580]     Audit-Id: 81b418e8-1810-4538-8620-bce797395450
	I0213 22:32:45.432270   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.432279   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.432614   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0213 22:32:45.433208   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:45.433232   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.433242   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.433254   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.435869   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.435893   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.435904   32908 round_trippers.go:580]     Audit-Id: 7fc3dead-4f1c-49b4-89c5-6b728826580b
	I0213 22:32:45.435912   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.435919   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.435927   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.435934   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.435941   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.436116   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:45.436525   32908 pod_ready.go:92] pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:45.436546   32908 pod_ready.go:81] duration metric: took 9.578802ms waiting for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.436559   32908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.436633   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-413653
	I0213 22:32:45.436645   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.436655   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.436667   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.442698   32908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0213 22:32:45.442727   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.442738   32908 round_trippers.go:580]     Audit-Id: f4607177-b4d6-4cc8-8298-81eab1e16c07
	I0213 22:32:45.442747   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.442777   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.442812   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.442828   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.442836   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.443037   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-413653","namespace":"kube-system","uid":"6adf5771-f03b-47ca-ad97-384b664fb8ab","resourceVersion":"833","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.81:2379","kubernetes.io/config.hash":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.mirror":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.seen":"2024-02-13T22:20:28.219611587Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0213 22:32:45.443539   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:45.443560   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.443571   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.443580   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.446191   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.446216   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.446227   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.446242   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.446254   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.446262   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.446273   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.446282   32908 round_trippers.go:580]     Audit-Id: b3508449-3613-41ed-b244-74c53f562d07
	I0213 22:32:45.446468   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:45.446829   32908 pod_ready.go:92] pod "etcd-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:45.446847   32908 pod_ready.go:81] duration metric: took 10.281413ms waiting for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.446874   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.446937   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:32:45.446948   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.446958   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.446968   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.449631   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.449654   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.449665   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.449673   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.449681   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.449690   32908 round_trippers.go:580]     Audit-Id: 9a190770-1ceb-46bc-89a6-142a5d8f4169
	I0213 22:32:45.449697   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.449706   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.450327   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"860","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0213 22:32:45.450770   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:45.450786   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.450796   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.450805   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.453427   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.453453   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.453463   32908 round_trippers.go:580]     Audit-Id: 5b7d124d-be56-46bc-aa22-153d11dc671c
	I0213 22:32:45.453472   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.453480   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.453488   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.453496   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.453507   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.453759   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:45.454196   32908 pod_ready.go:92] pod "kube-apiserver-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:45.454222   32908 pod_ready.go:81] duration metric: took 7.33759ms waiting for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.454236   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.454313   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-413653
	I0213 22:32:45.454326   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.454336   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.454345   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.457105   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.457130   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.457140   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.457148   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.457156   32908 round_trippers.go:580]     Audit-Id: cec36a3f-b8b8-47c9-ba79-7816c5763d08
	I0213 22:32:45.457164   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.457171   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.457179   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.457337   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-413653","namespace":"kube-system","uid":"1d3432c0-f2cd-4371-9599-9a119dc1a8ab","resourceVersion":"835","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.mirror":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.seen":"2024-02-13T22:20:28.219615864Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0213 22:32:45.457856   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:45.457891   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.457903   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.457912   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.466019   32908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0213 22:32:45.466046   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.466057   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.466067   32908 round_trippers.go:580]     Audit-Id: 8f691b50-c2e7-488f-a0a6-eae5191db711
	I0213 22:32:45.466076   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.466085   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.466093   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.466104   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.466259   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:45.466697   32908 pod_ready.go:92] pod "kube-controller-manager-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:45.466719   32908 pod_ready.go:81] duration metric: took 12.471691ms waiting for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.466733   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:45.614181   32908 request.go:629] Waited for 147.34951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:32:45.614248   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:32:45.614256   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.614268   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.614281   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.616876   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.616905   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.616914   32908 round_trippers.go:580]     Audit-Id: 3dcbcd04-4171-420f-8bdc-fa02605b9f7b
	I0213 22:32:45.616922   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.616935   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.616943   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.616951   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.616960   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.617158   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"1010","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0213 22:32:45.813969   32908 request.go:629] Waited for 196.376231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:45.814038   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:45.814043   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:45.814050   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:45.814059   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:45.816831   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:45.816861   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:45.816870   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:45.816875   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:45.816881   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:45 GMT
	I0213 22:32:45.816887   32908 round_trippers.go:580]     Audit-Id: d66c36dc-363a-4dc3-9975-a6314795ee0e
	I0213 22:32:45.816896   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:45.816905   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:45.817064   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"1e9c4839-34c8-4278-ae96-8c649be816a3","resourceVersion":"1005","creationTimestamp":"2024-02-13T22:32:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_32_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:32:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0213 22:32:46.014631   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:32:46.014655   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:46.014664   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:46.014670   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:46.017476   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:46.017505   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:46.017515   32908 round_trippers.go:580]     Audit-Id: 294abe33-f144-4303-abfc-1eae0e3af8ea
	I0213 22:32:46.017522   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:46.017530   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:46.017537   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:46.017545   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:46.017552   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:46 GMT
	I0213 22:32:46.017757   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"1010","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I0213 22:32:46.213603   32908 request.go:629] Waited for 195.34274ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:46.213703   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:46.213722   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:46.213733   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:46.213741   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:46.216818   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:32:46.216849   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:46.216859   32908 round_trippers.go:580]     Audit-Id: ea53ae4a-2630-40bd-bb98-0360c0f336c5
	I0213 22:32:46.216868   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:46.216876   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:46.216906   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:46.216919   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:46.216929   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:46 GMT
	I0213 22:32:46.217045   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"1e9c4839-34c8-4278-ae96-8c649be816a3","resourceVersion":"1005","creationTimestamp":"2024-02-13T22:32:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_32_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:32:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0213 22:32:46.467351   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:32:46.467377   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:46.467385   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:46.467391   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:46.470347   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:46.470373   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:46.470380   32908 round_trippers.go:580]     Audit-Id: 11816542-bb2c-4aae-977d-7ef4a75395c2
	I0213 22:32:46.470386   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:46.470391   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:46.470397   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:46.470402   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:46.470407   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:46 GMT
	I0213 22:32:46.470617   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"1026","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0213 22:32:46.614118   32908 request.go:629] Waited for 143.090181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:46.614203   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:32:46.614213   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:46.614223   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:46.614233   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:46.616866   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:46.616894   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:46.616904   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:46.616912   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:46.616919   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:46.616928   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:46 GMT
	I0213 22:32:46.616939   32908 round_trippers.go:580]     Audit-Id: b9bc52fa-99a6-42be-80ec-fdaeb21c77f8
	I0213 22:32:46.616948   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:46.617121   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"1e9c4839-34c8-4278-ae96-8c649be816a3","resourceVersion":"1005","creationTimestamp":"2024-02-13T22:32:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_32_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:32:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0213 22:32:46.617399   32908 pod_ready.go:92] pod "kube-proxy-26ww9" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:46.617418   32908 pod_ready.go:81] duration metric: took 1.150671714s waiting for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:46.617431   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:46.813964   32908 request.go:629] Waited for 196.467875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:32:46.814023   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:32:46.814028   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:46.814035   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:46.814041   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:46.816886   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:46.816916   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:46.816926   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:46.816933   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:46.816940   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:46 GMT
	I0213 22:32:46.816947   32908 round_trippers.go:580]     Audit-Id: b945adc7-5959-4378-9e38-339d29891349
	I0213 22:32:46.816955   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:46.816963   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:46.817181   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h5bvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7a12109-66cd-41a9-b7e7-4e53a27a4ca7","resourceVersion":"801","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0213 22:32:47.014080   32908 request.go:629] Waited for 196.391914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:47.014139   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:47.014144   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:47.014151   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:47.014157   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:47.016842   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:47.016871   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:47.016881   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:47.016887   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:47.016895   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:47.016900   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:47.016905   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:47 GMT
	I0213 22:32:47.016911   32908 round_trippers.go:580]     Audit-Id: 739d94bd-c308-4a9e-a8ce-cabf16ea11b5
	I0213 22:32:47.017085   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:47.017382   32908 pod_ready.go:92] pod "kube-proxy-h5bvp" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:47.017397   32908 pod_ready.go:81] duration metric: took 399.959812ms waiting for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:47.017406   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:47.214577   32908 request.go:629] Waited for 197.113843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:32:47.214693   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:32:47.214704   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:47.214712   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:47.214718   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:47.217335   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:47.217354   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:47.217361   32908 round_trippers.go:580]     Audit-Id: b23ecb27-cc7c-4a20-a40a-141c652a6c73
	I0213 22:32:47.217367   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:47.217372   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:47.217381   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:47.217386   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:47.217392   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:47 GMT
	I0213 22:32:47.217807   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4ggx","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9fa1c43-43a7-4737-8b10-e5327e355e9a","resourceVersion":"687","creationTimestamp":"2024-02-13T22:22:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0213 22:32:47.414632   32908 request.go:629] Waited for 196.401732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:32:47.414713   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:32:47.414718   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:47.414726   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:47.414731   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:47.417295   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:47.417315   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:47.417321   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:47 GMT
	I0213 22:32:47.417328   32908 round_trippers.go:580]     Audit-Id: e311012a-1142-49a5-8982-55a6d87b074e
	I0213 22:32:47.417336   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:47.417345   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:47.417352   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:47.417359   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:47.417764   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m03","uid":"3fd11080-7896-4845-a0ac-96b51f08d0cd","resourceVersion":"1006","creationTimestamp":"2024-02-13T22:22:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_32_44_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0213 22:32:47.418073   32908 pod_ready.go:92] pod "kube-proxy-k4ggx" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:47.418090   32908 pod_ready.go:81] duration metric: took 400.679608ms waiting for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:47.418099   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:47.614266   32908 request.go:629] Waited for 196.092758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:32:47.614347   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:32:47.614353   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:47.614360   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:47.614366   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:47.617290   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:47.617312   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:47.617320   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:47.617326   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:47.617331   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:47 GMT
	I0213 22:32:47.617336   32908 round_trippers.go:580]     Audit-Id: 1fa14bb7-8fed-4019-9a6d-5b1e02a9e2c8
	I0213 22:32:47.617341   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:47.617347   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:47.617575   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-413653","namespace":"kube-system","uid":"08710d51-793f-4606-9075-b5ab7331893e","resourceVersion":"861","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.mirror":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.seen":"2024-02-13T22:20:28.219616670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0213 22:32:47.814427   32908 request.go:629] Waited for 196.388944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:47.814487   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:32:47.814492   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:47.814499   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:47.814506   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:47.817070   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:32:47.817090   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:47.817096   32908 round_trippers.go:580]     Audit-Id: bcb5fd5f-b070-4083-a817-972fe21fa98d
	I0213 22:32:47.817102   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:47.817107   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:47.817112   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:47.817117   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:47.817122   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:47 GMT
	I0213 22:32:47.817332   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:32:47.817738   32908 pod_ready.go:92] pod "kube-scheduler-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:32:47.817764   32908 pod_ready.go:81] duration metric: took 399.659658ms waiting for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:32:47.817775   32908 pod_ready.go:38] duration metric: took 2.399767653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:32:47.817788   32908 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 22:32:47.817832   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:32:47.832310   32908 system_svc.go:56] duration metric: took 14.510083ms WaitForService to wait for kubelet.
	I0213 22:32:47.832341   32908 kubeadm.go:581] duration metric: took 2.438479439s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
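For reference, the pod-readiness polling recorded above (pod_ready.go waiting for the "Ready" condition on each system pod) can be reproduced with client-go. The following is a minimal illustrative sketch, not minikube's implementation: it assumes a kubeconfig at the default location and reuses the pod and namespace names from this log.

	// Poll a pod until it reports the Ready condition, mirroring the
	// pod_ready wait above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The log above uses a 6m0s per-pod timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-26ww9", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pod to be Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}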
	I0213 22:32:47.832361   32908 node_conditions.go:102] verifying NodePressure condition ...
	I0213 22:32:48.013764   32908 request.go:629] Waited for 181.328348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes
	I0213 22:32:48.013838   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes
	I0213 22:32:48.013843   32908 round_trippers.go:469] Request Headers:
	I0213 22:32:48.013851   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:32:48.013862   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:32:48.016963   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:32:48.016990   32908 round_trippers.go:577] Response Headers:
	I0213 22:32:48.017001   32908 round_trippers.go:580]     Audit-Id: ff9acf2f-11a5-41ee-9da8-352fd5a37ee4
	I0213 22:32:48.017010   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:32:48.017016   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:32:48.017021   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:32:48.017026   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:32:48.017030   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:32:48 GMT
	I0213 22:32:48.017506   32908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1030"},"items":[{"metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16209 chars]
	I0213 22:32:48.018143   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:32:48.018164   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:32:48.018175   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:32:48.018179   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:32:48.018185   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:32:48.018194   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:32:48.018199   32908 node_conditions.go:105] duration metric: took 185.833998ms to run NodePressure ...
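The node_conditions output above (ephemeral storage and cpu capacity per node) comes from the same NodeList shown in the preceding response body. A small sketch of reading those capacity fields with client-go, reusing the clientset and imports from the previous sketch:

	// Print the cpu and ephemeral-storage capacity of every node,
	// as the NodePressure check above does. Illustrative only.
	func printNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	}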
	I0213 22:32:48.018219   32908 start.go:228] waiting for startup goroutines ...
	I0213 22:32:48.018242   32908 start.go:242] writing updated cluster config ...
	I0213 22:32:48.018678   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:32:48.018778   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:32:48.021463   32908 out.go:177] * Starting worker node multinode-413653-m03 in cluster multinode-413653
	I0213 22:32:48.022867   32908 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 22:32:48.022896   32908 cache.go:56] Caching tarball of preloaded images
	I0213 22:32:48.022999   32908 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 22:32:48.023012   32908 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 22:32:48.023118   32908 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/config.json ...
	I0213 22:32:48.023345   32908 start.go:365] acquiring machines lock for multinode-413653-m03: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 22:32:48.023394   32908 start.go:369] acquired machines lock for "multinode-413653-m03" in 26.232µs
	I0213 22:32:48.023415   32908 start.go:96] Skipping create...Using existing machine configuration
	I0213 22:32:48.023422   32908 fix.go:54] fixHost starting: m03
	I0213 22:32:48.023708   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:32:48.023732   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:32:48.037895   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0213 22:32:48.038345   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:32:48.038812   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:32:48.038838   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:32:48.039144   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:32:48.039335   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:32:48.039489   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetState
	I0213 22:32:48.041271   32908 fix.go:102] recreateIfNeeded on multinode-413653-m03: state=Running err=<nil>
	W0213 22:32:48.041299   32908 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 22:32:48.043340   32908 out.go:177] * Updating the running kvm2 "multinode-413653-m03" VM ...
	I0213 22:32:48.044802   32908 machine.go:88] provisioning docker machine ...
	I0213 22:32:48.044827   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:32:48.045097   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetMachineName
	I0213 22:32:48.045273   32908 buildroot.go:166] provisioning hostname "multinode-413653-m03"
	I0213 22:32:48.045297   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetMachineName
	I0213 22:32:48.045459   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:32:48.047819   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.048294   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.048326   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.048488   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:32:48.048700   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.048847   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.048983   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:32:48.049192   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:32:48.049506   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0213 22:32:48.049519   32908 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-413653-m03 && echo "multinode-413653-m03" | sudo tee /etc/hostname
	I0213 22:32:48.196296   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-413653-m03
	
	I0213 22:32:48.196331   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:32:48.199574   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.199964   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.199987   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.200167   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:32:48.200368   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.200521   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.200694   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:32:48.200868   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:32:48.201238   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0213 22:32:48.201268   32908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-413653-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-413653-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-413653-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 22:32:48.335243   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
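The shell snippet above makes the hostname change idempotent: it only touches /etc/hosts if no entry for multinode-413653-m03 exists, rewriting an existing 127.0.1.1 line or appending a new one. A simplified Go sketch of the append branch (the path and hostname are taken from the log; this is not minikube's code and does not handle the rewrite branch):

	// Ensure /etc/hosts contains an entry for the given hostname,
	// appending a 127.0.1.1 line if none is present. Illustrative only.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) < 2 {
				continue
			}
			for _, name := range fields[1:] {
				if name == hostname {
					return nil // already present, nothing to do
				}
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
		return err
	}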
	I0213 22:32:48.335288   32908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 22:32:48.335310   32908 buildroot.go:174] setting up certificates
	I0213 22:32:48.335322   32908 provision.go:83] configureAuth start
	I0213 22:32:48.335336   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetMachineName
	I0213 22:32:48.335674   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetIP
	I0213 22:32:48.338610   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.339046   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.339070   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.339287   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:32:48.341707   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.342121   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.342161   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.342275   32908 provision.go:138] copyHostCerts
	I0213 22:32:48.342306   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:32:48.342338   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 22:32:48.342346   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 22:32:48.342409   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 22:32:48.342478   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:32:48.342498   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 22:32:48.342503   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 22:32:48.342526   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 22:32:48.342569   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:32:48.342592   32908 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 22:32:48.342598   32908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 22:32:48.342619   32908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 22:32:48.342663   32908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.multinode-413653-m03 san=[192.168.39.178 192.168.39.178 localhost 127.0.0.1 minikube multinode-413653-m03]
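The provision step above generates a server certificate signed by the minikube CA with the listed IP and DNS SANs. The sketch below shows the general shape of that operation using only Go's standard library; it creates a throwaway CA in-process so it is self-contained (minikube instead loads ca.pem/ca-key.pem), and error handling is elided for brevity.

	// Generate a server certificate with the SANs from the log above.
	// Illustrative only; not minikube's implementation.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the org and SANs shown in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-413653-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.178"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "multinode-413653-m03"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}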
	I0213 22:32:48.513009   32908 provision.go:172] copyRemoteCerts
	I0213 22:32:48.513061   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 22:32:48.513088   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:32:48.515743   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.516137   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.516166   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.516337   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:32:48.516535   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.516715   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:32:48.516847   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m03/id_rsa Username:docker}
	I0213 22:32:48.612864   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0213 22:32:48.612942   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 22:32:48.637362   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0213 22:32:48.637435   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 22:32:48.661523   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0213 22:32:48.661611   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0213 22:32:48.684003   32908 provision.go:86] duration metric: configureAuth took 348.666112ms
	I0213 22:32:48.684030   32908 buildroot.go:189] setting minikube options for container-runtime
	I0213 22:32:48.684267   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:32:48.684347   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:32:48.687135   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.687586   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:32:48.687619   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:32:48.687789   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:32:48.687964   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.688124   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:32:48.688288   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:32:48.688440   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:32:48.688754   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0213 22:32:48.688769   32908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 22:34:19.360175   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 22:34:19.360208   32908 machine.go:91] provisioned docker machine in 1m31.315389918s
	I0213 22:34:19.360218   32908 start.go:300] post-start starting for "multinode-413653-m03" (driver="kvm2")
	I0213 22:34:19.360228   32908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 22:34:19.360246   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:34:19.360540   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 22:34:19.360568   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:34:19.363611   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.364058   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:19.364091   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.364309   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:34:19.364508   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:34:19.364632   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:34:19.364756   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m03/id_rsa Username:docker}
	I0213 22:34:19.461411   32908 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 22:34:19.465894   32908 command_runner.go:130] > NAME=Buildroot
	I0213 22:34:19.465924   32908 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0213 22:34:19.465931   32908 command_runner.go:130] > ID=buildroot
	I0213 22:34:19.465939   32908 command_runner.go:130] > VERSION_ID=2021.02.12
	I0213 22:34:19.465946   32908 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0213 22:34:19.466005   32908 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 22:34:19.466034   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 22:34:19.466128   32908 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 22:34:19.466217   32908 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 22:34:19.466228   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /etc/ssl/certs/162002.pem
	I0213 22:34:19.466334   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 22:34:19.475465   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:34:19.497827   32908 start.go:303] post-start completed in 137.594993ms
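The post-start phase above creates minikube's standard directory layout on the node and syncs local assets: anything under the host's .minikube/files tree is mirrored onto the guest at the matching path, which is how the single cert 162002.pem ends up in /etc/ssl/certs. A minimal sketch of that convention, using a hypothetical certificate name:

    # Files placed under ~/.minikube/files/ are copied onto the node at the matching path
    # on the next start; the log above shows one such asset:
    #   .minikube/files/etc/ssl/certs/162002.pem -> /etc/ssl/certs/162002.pem (1708 bytes)
    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp extra-ca.pem ~/.minikube/files/etc/ssl/certs/   # extra-ca.pem is a hypothetical name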
	I0213 22:34:19.497858   32908 fix.go:56] fixHost completed within 1m31.474435961s
	I0213 22:34:19.497904   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:34:19.500529   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.500883   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:19.500906   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.501081   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:34:19.501285   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:34:19.501476   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:34:19.501623   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:34:19.501808   32908 main.go:141] libmachine: Using SSH client type: native
	I0213 22:34:19.502136   32908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.178 22 <nil> <nil>}
	I0213 22:34:19.502150   32908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 22:34:19.635073   32908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707863659.617831848
	
	I0213 22:34:19.635105   32908 fix.go:206] guest clock: 1707863659.617831848
	I0213 22:34:19.635113   32908 fix.go:219] Guest: 2024-02-13 22:34:19.617831848 +0000 UTC Remote: 2024-02-13 22:34:19.497862812 +0000 UTC m=+557.944578124 (delta=119.969036ms)
	I0213 22:34:19.635128   32908 fix.go:190] guest clock delta is within tolerance: 119.969036ms
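The delta reported here is simply guest time minus the host-side remote timestamp: 22:34:19.617831848 - 22:34:19.497862812 ≈ 0.119969 s, i.e. the 119.969036ms in the log, which is inside the skew tolerance, so no clock correction is attempted in this run.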
	I0213 22:34:19.635132   32908 start.go:83] releasing machines lock for "multinode-413653-m03", held for 1m31.611726037s
	I0213 22:34:19.635153   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:34:19.635392   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetIP
	I0213 22:34:19.638094   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.638517   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:19.638554   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.640613   32908 out.go:177] * Found network options:
	I0213 22:34:19.642108   32908 out.go:177]   - NO_PROXY=192.168.39.81,192.168.39.94
	W0213 22:34:19.643496   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0213 22:34:19.643527   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0213 22:34:19.643544   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:34:19.644221   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:34:19.644442   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .DriverName
	I0213 22:34:19.644553   32908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 22:34:19.644582   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	W0213 22:34:19.644656   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0213 22:34:19.644686   32908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0213 22:34:19.644753   32908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 22:34:19.644776   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHHostname
	I0213 22:34:19.647663   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.648042   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:19.648083   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.648129   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.648252   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:34:19.648476   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:34:19.648600   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:19.648625   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:19.648633   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:34:19.648767   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m03/id_rsa Username:docker}
	I0213 22:34:19.648803   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHPort
	I0213 22:34:19.648951   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHKeyPath
	I0213 22:34:19.649065   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetSSHUsername
	I0213 22:34:19.649188   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m03/id_rsa Username:docker}
	I0213 22:34:19.768689   32908 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0213 22:34:19.894655   32908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0213 22:34:19.901249   32908 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0213 22:34:19.901426   32908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 22:34:19.901503   32908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 22:34:19.911919   32908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
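The find invocation above (its printf format is mangled by the logger's %! escaping) disables any bridge or podman CNI definitions by renaming them with a .mk_disabled suffix rather than deleting them; in this run nothing matched. Written out plainly as an equivalent sketch:

    # Rename matching CNI configs instead of deleting them, so they can be restored later.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;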
	I0213 22:34:19.911954   32908 start.go:475] detecting cgroup driver to use...
	I0213 22:34:19.912039   32908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 22:34:19.928970   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 22:34:19.943554   32908 docker.go:217] disabling cri-docker service (if available) ...
	I0213 22:34:19.943627   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 22:34:19.961133   32908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 22:34:19.977654   32908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 22:34:20.142032   32908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 22:34:20.281895   32908 docker.go:233] disabling docker service ...
	I0213 22:34:20.281970   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 22:34:20.299519   32908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 22:34:20.313389   32908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 22:34:20.450927   32908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 22:34:20.590875   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 22:34:20.604451   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 22:34:20.622973   32908 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0213 22:34:20.623343   32908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 22:34:20.623409   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:34:20.636597   32908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 22:34:20.636657   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:34:20.647816   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:34:20.658304   32908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 22:34:20.669160   32908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 22:34:20.680378   32908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 22:34:20.690538   32908 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0213 22:34:20.690782   32908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 22:34:20.700100   32908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 22:34:20.836643   32908 ssh_runner.go:195] Run: sudo systemctl restart crio
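Taken together, the crictl and sed steps above point crictl at the CRI-O socket and rewrite three settings in CRI-O's drop-in config before the restart. Reconstructed from the commands in the log (the full effective configuration is dumped further below by crio config):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf, keys touched by the sed commands
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"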
	I0213 22:34:21.309207   32908 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 22:34:21.309294   32908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 22:34:21.315508   32908 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0213 22:34:21.315541   32908 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0213 22:34:21.315551   32908 command_runner.go:130] > Device: 16h/22d	Inode: 1162        Links: 1
	I0213 22:34:21.315562   32908 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:34:21.315570   32908 command_runner.go:130] > Access: 2024-02-13 22:34:21.209816707 +0000
	I0213 22:34:21.315580   32908 command_runner.go:130] > Modify: 2024-02-13 22:34:21.209816707 +0000
	I0213 22:34:21.315588   32908 command_runner.go:130] > Change: 2024-02-13 22:34:21.209816707 +0000
	I0213 22:34:21.315599   32908 command_runner.go:130] >  Birth: -
	I0213 22:34:21.316010   32908 start.go:543] Will wait 60s for crictl version
	I0213 22:34:21.316068   32908 ssh_runner.go:195] Run: which crictl
	I0213 22:34:21.320368   32908 command_runner.go:130] > /usr/bin/crictl
	I0213 22:34:21.320437   32908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 22:34:21.364822   32908 command_runner.go:130] > Version:  0.1.0
	I0213 22:34:21.364855   32908 command_runner.go:130] > RuntimeName:  cri-o
	I0213 22:34:21.364862   32908 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0213 22:34:21.364870   32908 command_runner.go:130] > RuntimeApiVersion:  v1
	I0213 22:34:21.365110   32908 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 22:34:21.365173   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:34:21.415254   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:34:21.415276   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:34:21.415286   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:34:21.415290   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:34:21.415296   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:34:21.415301   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:34:21.415305   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:34:21.415309   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:34:21.415314   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:34:21.415321   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:34:21.415325   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:34:21.415330   32908 command_runner.go:130] > AppArmorEnabled:  false
	I0213 22:34:21.415404   32908 ssh_runner.go:195] Run: crio --version
	I0213 22:34:21.470307   32908 command_runner.go:130] > crio version 1.24.1
	I0213 22:34:21.470339   32908 command_runner.go:130] > Version:          1.24.1
	I0213 22:34:21.470350   32908 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0213 22:34:21.470357   32908 command_runner.go:130] > GitTreeState:     dirty
	I0213 22:34:21.470366   32908 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0213 22:34:21.470373   32908 command_runner.go:130] > GoVersion:        go1.19.9
	I0213 22:34:21.470379   32908 command_runner.go:130] > Compiler:         gc
	I0213 22:34:21.470386   32908 command_runner.go:130] > Platform:         linux/amd64
	I0213 22:34:21.470394   32908 command_runner.go:130] > Linkmode:         dynamic
	I0213 22:34:21.470404   32908 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0213 22:34:21.470413   32908 command_runner.go:130] > SeccompEnabled:   true
	I0213 22:34:21.470420   32908 command_runner.go:130] > AppArmorEnabled:  false
	I0213 22:34:21.472677   32908 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 22:34:21.474090   32908 out.go:177]   - env NO_PROXY=192.168.39.81
	I0213 22:34:21.475403   32908 out.go:177]   - env NO_PROXY=192.168.39.81,192.168.39.94
	I0213 22:34:21.476761   32908 main.go:141] libmachine: (multinode-413653-m03) Calling .GetIP
	I0213 22:34:21.479460   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:21.479797   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:dc:b8", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:22:44 +0000 UTC Type:0 Mac:52:54:00:83:dc:b8 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-413653-m03 Clientid:01:52:54:00:83:dc:b8}
	I0213 22:34:21.479832   32908 main.go:141] libmachine: (multinode-413653-m03) DBG | domain multinode-413653-m03 has defined IP address 192.168.39.178 and MAC address 52:54:00:83:dc:b8 in network mk-multinode-413653
	I0213 22:34:21.479998   32908 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 22:34:21.484882   32908 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0213 22:34:21.484977   32908 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653 for IP: 192.168.39.178
	I0213 22:34:21.484999   32908 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 22:34:21.485140   32908 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 22:34:21.485180   32908 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 22:34:21.485191   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0213 22:34:21.485207   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0213 22:34:21.485219   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0213 22:34:21.485231   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0213 22:34:21.485283   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 22:34:21.485311   32908 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 22:34:21.485323   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 22:34:21.485348   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 22:34:21.485372   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 22:34:21.485395   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 22:34:21.485432   32908 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 22:34:21.485476   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:34:21.485497   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem -> /usr/share/ca-certificates/16200.pem
	I0213 22:34:21.485517   32908 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> /usr/share/ca-certificates/162002.pem
	I0213 22:34:21.485923   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 22:34:21.511948   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 22:34:21.537437   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 22:34:21.569195   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 22:34:21.594979   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 22:34:21.619817   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 22:34:21.644448   32908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 22:34:21.669420   32908 ssh_runner.go:195] Run: openssl version
	I0213 22:34:21.675484   32908 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0213 22:34:21.675577   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 22:34:21.686663   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:34:21.691799   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:34:21.691822   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:34:21.691874   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 22:34:21.697798   32908 command_runner.go:130] > b5213941
	I0213 22:34:21.697910   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 22:34:21.708758   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 22:34:21.721007   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 22:34:21.726335   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:34:21.726576   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 22:34:21.726631   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 22:34:21.732723   32908 command_runner.go:130] > 51391683
	I0213 22:34:21.732802   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 22:34:21.742920   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 22:34:21.754647   32908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 22:34:21.759672   32908 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:34:21.759997   32908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 22:34:21.760058   32908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 22:34:21.766303   32908 command_runner.go:130] > 3ec20f2e
	I0213 22:34:21.766617   32908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
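Each of the three certificate blocks above follows the same pattern: link the cert from /usr/share/ca-certificates into /etc/ssl/certs, compute its OpenSSL subject hash (b5213941, 51391683 and 3ec20f2e in this run), and create the <hash>.0 symlink that OpenSSL uses to locate CAs in /etc/ssl/certs. The pattern written out once, with a hypothetical cert name:

    sudo ln -fs /usr/share/ca-certificates/extra-ca.pem /etc/ssl/certs/extra-ca.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extra-ca.pem)
    sudo ln -fs /etc/ssl/certs/extra-ca.pem "/etc/ssl/certs/${hash}.0"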
	I0213 22:34:21.776997   32908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 22:34:21.781507   32908 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 22:34:21.781554   32908 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 22:34:21.781645   32908 ssh_runner.go:195] Run: crio config
	I0213 22:34:21.839648   32908 command_runner.go:130] ! time="2024-02-13 22:34:21.822746911Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0213 22:34:21.839679   32908 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0213 22:34:21.848449   32908 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0213 22:34:21.848473   32908 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0213 22:34:21.848480   32908 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0213 22:34:21.848484   32908 command_runner.go:130] > #
	I0213 22:34:21.848490   32908 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0213 22:34:21.848496   32908 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0213 22:34:21.848502   32908 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0213 22:34:21.848510   32908 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0213 22:34:21.848514   32908 command_runner.go:130] > # reload'.
	I0213 22:34:21.848520   32908 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0213 22:34:21.848527   32908 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0213 22:34:21.848533   32908 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0213 22:34:21.848542   32908 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0213 22:34:21.848547   32908 command_runner.go:130] > [crio]
	I0213 22:34:21.848555   32908 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0213 22:34:21.848562   32908 command_runner.go:130] > # containers images, in this directory.
	I0213 22:34:21.848573   32908 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0213 22:34:21.848585   32908 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0213 22:34:21.848596   32908 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0213 22:34:21.848609   32908 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0213 22:34:21.848620   32908 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0213 22:34:21.848630   32908 command_runner.go:130] > storage_driver = "overlay"
	I0213 22:34:21.848641   32908 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0213 22:34:21.848652   32908 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0213 22:34:21.848662   32908 command_runner.go:130] > storage_option = [
	I0213 22:34:21.848669   32908 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0213 22:34:21.848677   32908 command_runner.go:130] > ]
	I0213 22:34:21.848692   32908 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0213 22:34:21.848701   32908 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0213 22:34:21.848706   32908 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0213 22:34:21.848714   32908 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0213 22:34:21.848722   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0213 22:34:21.848728   32908 command_runner.go:130] > # always happen on a node reboot
	I0213 22:34:21.848734   32908 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0213 22:34:21.848743   32908 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0213 22:34:21.848749   32908 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0213 22:34:21.848760   32908 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0213 22:34:21.848766   32908 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0213 22:34:21.848774   32908 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0213 22:34:21.848784   32908 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0213 22:34:21.848790   32908 command_runner.go:130] > # internal_wipe = true
	I0213 22:34:21.848795   32908 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0213 22:34:21.848803   32908 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0213 22:34:21.848809   32908 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0213 22:34:21.848816   32908 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0213 22:34:21.848822   32908 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0213 22:34:21.848827   32908 command_runner.go:130] > [crio.api]
	I0213 22:34:21.848832   32908 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0213 22:34:21.848839   32908 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0213 22:34:21.848845   32908 command_runner.go:130] > # IP address on which the stream server will listen.
	I0213 22:34:21.848852   32908 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0213 22:34:21.848859   32908 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0213 22:34:21.848867   32908 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0213 22:34:21.848872   32908 command_runner.go:130] > # stream_port = "0"
	I0213 22:34:21.848877   32908 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0213 22:34:21.848884   32908 command_runner.go:130] > # stream_enable_tls = false
	I0213 22:34:21.848890   32908 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0213 22:34:21.848898   32908 command_runner.go:130] > # stream_idle_timeout = ""
	I0213 22:34:21.848906   32908 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0213 22:34:21.848914   32908 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0213 22:34:21.848921   32908 command_runner.go:130] > # minutes.
	I0213 22:34:21.848925   32908 command_runner.go:130] > # stream_tls_cert = ""
	I0213 22:34:21.848933   32908 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0213 22:34:21.848942   32908 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0213 22:34:21.848948   32908 command_runner.go:130] > # stream_tls_key = ""
	I0213 22:34:21.848954   32908 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0213 22:34:21.848963   32908 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0213 22:34:21.848972   32908 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0213 22:34:21.848978   32908 command_runner.go:130] > # stream_tls_ca = ""
	I0213 22:34:21.848986   32908 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:34:21.848993   32908 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0213 22:34:21.849000   32908 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0213 22:34:21.849007   32908 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0213 22:34:21.849021   32908 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0213 22:34:21.849029   32908 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0213 22:34:21.849035   32908 command_runner.go:130] > [crio.runtime]
	I0213 22:34:21.849041   32908 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0213 22:34:21.849049   32908 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0213 22:34:21.849056   32908 command_runner.go:130] > # "nofile=1024:2048"
	I0213 22:34:21.849062   32908 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0213 22:34:21.849069   32908 command_runner.go:130] > # default_ulimits = [
	I0213 22:34:21.849072   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849081   32908 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0213 22:34:21.849085   32908 command_runner.go:130] > # no_pivot = false
	I0213 22:34:21.849093   32908 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0213 22:34:21.849099   32908 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0213 22:34:21.849106   32908 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0213 22:34:21.849112   32908 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0213 22:34:21.849119   32908 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0213 22:34:21.849125   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:34:21.849132   32908 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0213 22:34:21.849137   32908 command_runner.go:130] > # Cgroup setting for conmon
	I0213 22:34:21.849145   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0213 22:34:21.849152   32908 command_runner.go:130] > conmon_cgroup = "pod"
	I0213 22:34:21.849158   32908 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0213 22:34:21.849166   32908 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0213 22:34:21.849174   32908 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0213 22:34:21.849180   32908 command_runner.go:130] > conmon_env = [
	I0213 22:34:21.849186   32908 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0213 22:34:21.849197   32908 command_runner.go:130] > ]
	I0213 22:34:21.849208   32908 command_runner.go:130] > # Additional environment variables to set for all the
	I0213 22:34:21.849218   32908 command_runner.go:130] > # containers. These are overridden if set in the
	I0213 22:34:21.849229   32908 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0213 22:34:21.849240   32908 command_runner.go:130] > # default_env = [
	I0213 22:34:21.849248   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849261   32908 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0213 22:34:21.849273   32908 command_runner.go:130] > # selinux = false
	I0213 22:34:21.849291   32908 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0213 22:34:21.849301   32908 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0213 22:34:21.849313   32908 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0213 22:34:21.849324   32908 command_runner.go:130] > # seccomp_profile = ""
	I0213 22:34:21.849336   32908 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0213 22:34:21.849348   32908 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0213 22:34:21.849360   32908 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0213 22:34:21.849372   32908 command_runner.go:130] > # which might increase security.
	I0213 22:34:21.849382   32908 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0213 22:34:21.849393   32908 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0213 22:34:21.849406   32908 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0213 22:34:21.849418   32908 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0213 22:34:21.849428   32908 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0213 22:34:21.849435   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:34:21.849443   32908 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0213 22:34:21.849449   32908 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0213 22:34:21.849455   32908 command_runner.go:130] > # the cgroup blockio controller.
	I0213 22:34:21.849460   32908 command_runner.go:130] > # blockio_config_file = ""
	I0213 22:34:21.849468   32908 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0213 22:34:21.849472   32908 command_runner.go:130] > # irqbalance daemon.
	I0213 22:34:21.849478   32908 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0213 22:34:21.849485   32908 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0213 22:34:21.849492   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:34:21.849497   32908 command_runner.go:130] > # rdt_config_file = ""
	I0213 22:34:21.849502   32908 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0213 22:34:21.849507   32908 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0213 22:34:21.849514   32908 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0213 22:34:21.849520   32908 command_runner.go:130] > # separate_pull_cgroup = ""
	I0213 22:34:21.849527   32908 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0213 22:34:21.849535   32908 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0213 22:34:21.849539   32908 command_runner.go:130] > # will be added.
	I0213 22:34:21.849546   32908 command_runner.go:130] > # default_capabilities = [
	I0213 22:34:21.849551   32908 command_runner.go:130] > # 	"CHOWN",
	I0213 22:34:21.849557   32908 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0213 22:34:21.849560   32908 command_runner.go:130] > # 	"FSETID",
	I0213 22:34:21.849566   32908 command_runner.go:130] > # 	"FOWNER",
	I0213 22:34:21.849570   32908 command_runner.go:130] > # 	"SETGID",
	I0213 22:34:21.849574   32908 command_runner.go:130] > # 	"SETUID",
	I0213 22:34:21.849578   32908 command_runner.go:130] > # 	"SETPCAP",
	I0213 22:34:21.849584   32908 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0213 22:34:21.849588   32908 command_runner.go:130] > # 	"KILL",
	I0213 22:34:21.849594   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849600   32908 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0213 22:34:21.849608   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:34:21.849613   32908 command_runner.go:130] > # default_sysctls = [
	I0213 22:34:21.849616   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849623   32908 command_runner.go:130] > # List of devices on the host that a
	I0213 22:34:21.849629   32908 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0213 22:34:21.849634   32908 command_runner.go:130] > # allowed_devices = [
	I0213 22:34:21.849640   32908 command_runner.go:130] > # 	"/dev/fuse",
	I0213 22:34:21.849643   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849651   32908 command_runner.go:130] > # List of additional devices. specified as
	I0213 22:34:21.849658   32908 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0213 22:34:21.849665   32908 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0213 22:34:21.849682   32908 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0213 22:34:21.849688   32908 command_runner.go:130] > # additional_devices = [
	I0213 22:34:21.849692   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849698   32908 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0213 22:34:21.849704   32908 command_runner.go:130] > # cdi_spec_dirs = [
	I0213 22:34:21.849708   32908 command_runner.go:130] > # 	"/etc/cdi",
	I0213 22:34:21.849714   32908 command_runner.go:130] > # 	"/var/run/cdi",
	I0213 22:34:21.849718   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849727   32908 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0213 22:34:21.849735   32908 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0213 22:34:21.849742   32908 command_runner.go:130] > # Defaults to false.
	I0213 22:34:21.849747   32908 command_runner.go:130] > # device_ownership_from_security_context = false
	I0213 22:34:21.849755   32908 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0213 22:34:21.849764   32908 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0213 22:34:21.849771   32908 command_runner.go:130] > # hooks_dir = [
	I0213 22:34:21.849775   32908 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0213 22:34:21.849781   32908 command_runner.go:130] > # ]
	I0213 22:34:21.849787   32908 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0213 22:34:21.849796   32908 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0213 22:34:21.849803   32908 command_runner.go:130] > # its default mounts from the following two files:
	I0213 22:34:21.849806   32908 command_runner.go:130] > #
	I0213 22:34:21.849815   32908 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0213 22:34:21.849822   32908 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0213 22:34:21.849830   32908 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0213 22:34:21.849834   32908 command_runner.go:130] > #
	I0213 22:34:21.849840   32908 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0213 22:34:21.849849   32908 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0213 22:34:21.849857   32908 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0213 22:34:21.849864   32908 command_runner.go:130] > #      only add mounts it finds in this file.
	I0213 22:34:21.849885   32908 command_runner.go:130] > #
	I0213 22:34:21.849895   32908 command_runner.go:130] > # default_mounts_file = ""
	I0213 22:34:21.849902   32908 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0213 22:34:21.849911   32908 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0213 22:34:21.849917   32908 command_runner.go:130] > pids_limit = 1024
	I0213 22:34:21.849924   32908 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0213 22:34:21.849932   32908 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0213 22:34:21.849941   32908 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0213 22:34:21.849951   32908 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0213 22:34:21.849957   32908 command_runner.go:130] > # log_size_max = -1
	I0213 22:34:21.849964   32908 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0213 22:34:21.849970   32908 command_runner.go:130] > # log_to_journald = false
	I0213 22:34:21.849977   32908 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0213 22:34:21.849984   32908 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0213 22:34:21.849993   32908 command_runner.go:130] > # Path to directory for container attach sockets.
	I0213 22:34:21.850000   32908 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0213 22:34:21.850006   32908 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0213 22:34:21.850013   32908 command_runner.go:130] > # bind_mount_prefix = ""
	I0213 22:34:21.850019   32908 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0213 22:34:21.850025   32908 command_runner.go:130] > # read_only = false
	I0213 22:34:21.850031   32908 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0213 22:34:21.850044   32908 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0213 22:34:21.850055   32908 command_runner.go:130] > # live configuration reload.
	I0213 22:34:21.850065   32908 command_runner.go:130] > # log_level = "info"
	I0213 22:34:21.850076   32908 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0213 22:34:21.850087   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:34:21.850096   32908 command_runner.go:130] > # log_filter = ""
	I0213 22:34:21.850109   32908 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0213 22:34:21.850121   32908 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0213 22:34:21.850131   32908 command_runner.go:130] > # separated by comma.
	I0213 22:34:21.850138   32908 command_runner.go:130] > # uid_mappings = ""
	I0213 22:34:21.850147   32908 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0213 22:34:21.850161   32908 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0213 22:34:21.850171   32908 command_runner.go:130] > # separated by comma.
	I0213 22:34:21.850180   32908 command_runner.go:130] > # gid_mappings = ""
	I0213 22:34:21.850194   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0213 22:34:21.850204   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:34:21.850212   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:34:21.850219   32908 command_runner.go:130] > # minimum_mappable_uid = -1
	I0213 22:34:21.850225   32908 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0213 22:34:21.850234   32908 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0213 22:34:21.850242   32908 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0213 22:34:21.850249   32908 command_runner.go:130] > # minimum_mappable_gid = -1
	I0213 22:34:21.850255   32908 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0213 22:34:21.850263   32908 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0213 22:34:21.850271   32908 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0213 22:34:21.850284   32908 command_runner.go:130] > # ctr_stop_timeout = 30
	I0213 22:34:21.850292   32908 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0213 22:34:21.850298   32908 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0213 22:34:21.850306   32908 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0213 22:34:21.850314   32908 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0213 22:34:21.850319   32908 command_runner.go:130] > drop_infra_ctr = false
	I0213 22:34:21.850330   32908 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0213 22:34:21.850340   32908 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0213 22:34:21.850354   32908 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0213 22:34:21.850364   32908 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0213 22:34:21.850376   32908 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0213 22:34:21.850388   32908 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0213 22:34:21.850398   32908 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0213 22:34:21.850412   32908 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0213 22:34:21.850422   32908 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0213 22:34:21.850434   32908 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0213 22:34:21.850446   32908 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0213 22:34:21.850459   32908 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0213 22:34:21.850469   32908 command_runner.go:130] > # default_runtime = "runc"
	I0213 22:34:21.850481   32908 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0213 22:34:21.850496   32908 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0213 22:34:21.850508   32908 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0213 22:34:21.850515   32908 command_runner.go:130] > # creation as a file is not desired either.
	I0213 22:34:21.850523   32908 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0213 22:34:21.850531   32908 command_runner.go:130] > # the hostname is being managed dynamically.
	I0213 22:34:21.850535   32908 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0213 22:34:21.850540   32908 command_runner.go:130] > # ]
	I0213 22:34:21.850546   32908 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0213 22:34:21.850554   32908 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0213 22:34:21.850561   32908 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0213 22:34:21.850567   32908 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0213 22:34:21.850573   32908 command_runner.go:130] > #
	I0213 22:34:21.850578   32908 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0213 22:34:21.850585   32908 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0213 22:34:21.850589   32908 command_runner.go:130] > #  runtime_type = "oci"
	I0213 22:34:21.850595   32908 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0213 22:34:21.850602   32908 command_runner.go:130] > #  privileged_without_host_devices = false
	I0213 22:34:21.850612   32908 command_runner.go:130] > #  allowed_annotations = []
	I0213 22:34:21.850621   32908 command_runner.go:130] > # Where:
	I0213 22:34:21.850633   32908 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0213 22:34:21.850645   32908 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0213 22:34:21.850658   32908 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0213 22:34:21.850671   32908 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0213 22:34:21.850680   32908 command_runner.go:130] > #   in $PATH.
	I0213 22:34:21.850692   32908 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0213 22:34:21.850702   32908 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0213 22:34:21.850716   32908 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0213 22:34:21.850727   32908 command_runner.go:130] > #   state.
	I0213 22:34:21.850740   32908 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0213 22:34:21.850752   32908 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0213 22:34:21.850765   32908 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0213 22:34:21.850777   32908 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0213 22:34:21.850790   32908 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0213 22:34:21.850804   32908 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0213 22:34:21.850815   32908 command_runner.go:130] > #   The currently recognized values are:
	I0213 22:34:21.850829   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0213 22:34:21.850844   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0213 22:34:21.850856   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0213 22:34:21.850870   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0213 22:34:21.850885   32908 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0213 22:34:21.850898   32908 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0213 22:34:21.850911   32908 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0213 22:34:21.850925   32908 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0213 22:34:21.850936   32908 command_runner.go:130] > #   should be moved to the container's cgroup
	I0213 22:34:21.850947   32908 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0213 22:34:21.850957   32908 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0213 22:34:21.850967   32908 command_runner.go:130] > runtime_type = "oci"
	I0213 22:34:21.850977   32908 command_runner.go:130] > runtime_root = "/run/runc"
	I0213 22:34:21.850987   32908 command_runner.go:130] > runtime_config_path = ""
	I0213 22:34:21.850996   32908 command_runner.go:130] > monitor_path = ""
	I0213 22:34:21.851006   32908 command_runner.go:130] > monitor_cgroup = ""
	I0213 22:34:21.851013   32908 command_runner.go:130] > monitor_exec_cgroup = ""
	I0213 22:34:21.851026   32908 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0213 22:34:21.851036   32908 command_runner.go:130] > # running containers
	I0213 22:34:21.851047   32908 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0213 22:34:21.851060   32908 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0213 22:34:21.851095   32908 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0213 22:34:21.851109   32908 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0213 22:34:21.851117   32908 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0213 22:34:21.851128   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0213 22:34:21.851139   32908 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0213 22:34:21.851150   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0213 22:34:21.851160   32908 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0213 22:34:21.851172   32908 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0213 22:34:21.851185   32908 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0213 22:34:21.851197   32908 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0213 22:34:21.851210   32908 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0213 22:34:21.851226   32908 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix and a set of resources it supports mutating.
	I0213 22:34:21.851240   32908 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0213 22:34:21.851253   32908 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0213 22:34:21.851270   32908 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0213 22:34:21.851291   32908 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0213 22:34:21.851304   32908 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0213 22:34:21.851315   32908 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0213 22:34:21.851325   32908 command_runner.go:130] > # Example:
	I0213 22:34:21.851336   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0213 22:34:21.851347   32908 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0213 22:34:21.851358   32908 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0213 22:34:21.851369   32908 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0213 22:34:21.851380   32908 command_runner.go:130] > # cpuset = "0-1"
	I0213 22:34:21.851390   32908 command_runner.go:130] > # cpushares = 0
	I0213 22:34:21.851397   32908 command_runner.go:130] > # Where:
	I0213 22:34:21.851408   32908 command_runner.go:130] > # The workload name is workload-type.
	I0213 22:34:21.851423   32908 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0213 22:34:21.851435   32908 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0213 22:34:21.851447   32908 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0213 22:34:21.851463   32908 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0213 22:34:21.851476   32908 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0213 22:34:21.851484   32908 command_runner.go:130] > # 
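	The workload opt-in described above is carried entirely in pod annotations, so a client only has to attach them at pod-creation time. A minimal client-go sketch, assuming the hypothetical "workload-type" workload from the commented example; the container name "app" and the share value "512" are illustrative, not anything this cluster configures:

	package main

	import (
		"encoding/json"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Opts the pod into the hypothetical "workload-type" workload from the
		// commented example above: the activation annotation is key-only, and
		// the prefixed annotation overrides cpushares for the "app" container.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "workload-demo",
				Annotations: map[string]string{
					"io.crio/workload":          "",
					"io.crio.workload-type/app": `{"cpushares": "512"}`,
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "registry.k8s.io/pause:3.9"},
				},
			},
		}
		out, _ := json.MarshalIndent(pod.ObjectMeta.Annotations, "", "  ")
		fmt.Println(string(out))
	}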
	I0213 22:34:21.851495   32908 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0213 22:34:21.851504   32908 command_runner.go:130] > #
	I0213 22:34:21.851514   32908 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0213 22:34:21.851526   32908 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0213 22:34:21.851541   32908 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0213 22:34:21.851554   32908 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0213 22:34:21.851566   32908 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0213 22:34:21.851575   32908 command_runner.go:130] > [crio.image]
	I0213 22:34:21.851585   32908 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0213 22:34:21.851595   32908 command_runner.go:130] > # default_transport = "docker://"
	I0213 22:34:21.851618   32908 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0213 22:34:21.851630   32908 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:34:21.851639   32908 command_runner.go:130] > # global_auth_file = ""
	I0213 22:34:21.851648   32908 command_runner.go:130] > # The image used to instantiate infra containers.
	I0213 22:34:21.851659   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:34:21.851669   32908 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0213 22:34:21.851682   32908 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0213 22:34:21.851693   32908 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0213 22:34:21.851704   32908 command_runner.go:130] > # This option supports live configuration reload.
	I0213 22:34:21.851714   32908 command_runner.go:130] > # pause_image_auth_file = ""
	I0213 22:34:21.851726   32908 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0213 22:34:21.851739   32908 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0213 22:34:21.851753   32908 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0213 22:34:21.851766   32908 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0213 22:34:21.851776   32908 command_runner.go:130] > # pause_command = "/pause"
	I0213 22:34:21.851789   32908 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0213 22:34:21.851802   32908 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0213 22:34:21.851815   32908 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0213 22:34:21.851828   32908 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0213 22:34:21.851840   32908 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0213 22:34:21.851850   32908 command_runner.go:130] > # signature_policy = ""
	I0213 22:34:21.851860   32908 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0213 22:34:21.851869   32908 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0213 22:34:21.851876   32908 command_runner.go:130] > # changing them here.
	I0213 22:34:21.851881   32908 command_runner.go:130] > # insecure_registries = [
	I0213 22:34:21.851886   32908 command_runner.go:130] > # ]
	I0213 22:34:21.851895   32908 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0213 22:34:21.851902   32908 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0213 22:34:21.851911   32908 command_runner.go:130] > # image_volumes = "mkdir"
	I0213 22:34:21.851919   32908 command_runner.go:130] > # Temporary directory to use for storing big files
	I0213 22:34:21.851925   32908 command_runner.go:130] > # big_files_temporary_dir = ""
	I0213 22:34:21.851934   32908 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0213 22:34:21.851941   32908 command_runner.go:130] > # CNI plugins.
	I0213 22:34:21.851945   32908 command_runner.go:130] > [crio.network]
	I0213 22:34:21.851954   32908 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0213 22:34:21.851961   32908 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0213 22:34:21.851968   32908 command_runner.go:130] > # cni_default_network = ""
	I0213 22:34:21.851978   32908 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0213 22:34:21.851984   32908 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0213 22:34:21.851990   32908 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0213 22:34:21.851996   32908 command_runner.go:130] > # plugin_dirs = [
	I0213 22:34:21.852001   32908 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0213 22:34:21.852007   32908 command_runner.go:130] > # ]
	I0213 22:34:21.852014   32908 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0213 22:34:21.852020   32908 command_runner.go:130] > [crio.metrics]
	I0213 22:34:21.852025   32908 command_runner.go:130] > # Globally enable or disable metrics support.
	I0213 22:34:21.852032   32908 command_runner.go:130] > enable_metrics = true
	I0213 22:34:21.852037   32908 command_runner.go:130] > # Specify enabled metrics collectors.
	I0213 22:34:21.852044   32908 command_runner.go:130] > # Per default all metrics are enabled.
	I0213 22:34:21.852050   32908 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0213 22:34:21.852059   32908 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0213 22:34:21.852067   32908 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0213 22:34:21.852073   32908 command_runner.go:130] > # metrics_collectors = [
	I0213 22:34:21.852077   32908 command_runner.go:130] > # 	"operations",
	I0213 22:34:21.852085   32908 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0213 22:34:21.852092   32908 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0213 22:34:21.852096   32908 command_runner.go:130] > # 	"operations_errors",
	I0213 22:34:21.852103   32908 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0213 22:34:21.852108   32908 command_runner.go:130] > # 	"image_pulls_by_name",
	I0213 22:34:21.852114   32908 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0213 22:34:21.852119   32908 command_runner.go:130] > # 	"image_pulls_failures",
	I0213 22:34:21.852125   32908 command_runner.go:130] > # 	"image_pulls_successes",
	I0213 22:34:21.852129   32908 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0213 22:34:21.852136   32908 command_runner.go:130] > # 	"image_layer_reuse",
	I0213 22:34:21.852140   32908 command_runner.go:130] > # 	"containers_oom_total",
	I0213 22:34:21.852146   32908 command_runner.go:130] > # 	"containers_oom",
	I0213 22:34:21.852150   32908 command_runner.go:130] > # 	"processes_defunct",
	I0213 22:34:21.852156   32908 command_runner.go:130] > # 	"operations_total",
	I0213 22:34:21.852161   32908 command_runner.go:130] > # 	"operations_latency_seconds",
	I0213 22:34:21.852168   32908 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0213 22:34:21.852172   32908 command_runner.go:130] > # 	"operations_errors_total",
	I0213 22:34:21.852179   32908 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0213 22:34:21.852185   32908 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0213 22:34:21.852191   32908 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0213 22:34:21.852196   32908 command_runner.go:130] > # 	"image_pulls_success_total",
	I0213 22:34:21.852202   32908 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0213 22:34:21.852206   32908 command_runner.go:130] > # 	"containers_oom_count_total",
	I0213 22:34:21.852212   32908 command_runner.go:130] > # ]
	I0213 22:34:21.852217   32908 command_runner.go:130] > # The port on which the metrics server will listen.
	I0213 22:34:21.852224   32908 command_runner.go:130] > # metrics_port = 9090
	I0213 22:34:21.852229   32908 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0213 22:34:21.852235   32908 command_runner.go:130] > # metrics_socket = ""
	I0213 22:34:21.852240   32908 command_runner.go:130] > # The certificate for the secure metrics server.
	I0213 22:34:21.852248   32908 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0213 22:34:21.852257   32908 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0213 22:34:21.852264   32908 command_runner.go:130] > # certificate on any modification event.
	I0213 22:34:21.852269   32908 command_runner.go:130] > # metrics_cert = ""
	I0213 22:34:21.852281   32908 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0213 22:34:21.852288   32908 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0213 22:34:21.852292   32908 command_runner.go:130] > # metrics_key = ""
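	Since enable_metrics is set to true above and metrics_port defaults to 9090, the Prometheus text endpoint can be checked with a plain HTTP GET. A small sketch, assuming it is run on the host where CRI-O is listening on localhost:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// enable_metrics = true above; metrics_port defaults to 9090.
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %s, %d bytes of Prometheus text exposition\n", resp.Status, len(body))
	}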
	I0213 22:34:21.852300   32908 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0213 22:34:21.852305   32908 command_runner.go:130] > [crio.tracing]
	I0213 22:34:21.852311   32908 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0213 22:34:21.852317   32908 command_runner.go:130] > # enable_tracing = false
	I0213 22:34:21.852323   32908 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0213 22:34:21.852330   32908 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0213 22:34:21.852335   32908 command_runner.go:130] > # Number of samples to collect per million spans.
	I0213 22:34:21.852342   32908 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0213 22:34:21.852348   32908 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0213 22:34:21.852354   32908 command_runner.go:130] > [crio.stats]
	I0213 22:34:21.852360   32908 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0213 22:34:21.852368   32908 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0213 22:34:21.852373   32908 command_runner.go:130] > # stats_collection_period = 0
	I0213 22:34:21.852448   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:34:21.852458   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:34:21.852466   32908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 22:34:21.852485   32908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.178 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-413653 NodeName:multinode-413653-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 22:34:21.852588   32908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-413653-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 22:34:21.852636   32908 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-413653-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
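	The InitConfiguration/ClusterConfiguration block above is rendered by minikube from the kubeadm options struct. The sketch below shows the general idea with Go's text/template for just the per-node InitConfiguration fields; the template text and struct fields are illustrative, not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig holds only the values that vary per node in the
	// InitConfiguration fragment above (illustrative, not minikube's own types).
	type nodeConfig struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
		cfg := nodeConfig{
			AdvertiseAddress: "192.168.39.178",
			BindPort:         8443,
			NodeName:         "multinode-413653-m03",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		_ = tmpl.Execute(os.Stdout, cfg)
	}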
	I0213 22:34:21.852683   32908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 22:34:21.862880   32908 command_runner.go:130] > kubeadm
	I0213 22:34:21.862902   32908 command_runner.go:130] > kubectl
	I0213 22:34:21.862906   32908 command_runner.go:130] > kubelet
	I0213 22:34:21.862939   32908 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 22:34:21.862996   32908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0213 22:34:21.872611   32908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0213 22:34:21.889787   32908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 22:34:21.906454   32908 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I0213 22:34:21.910503   32908 command_runner.go:130] > 192.168.39.81	control-plane.minikube.internal
	I0213 22:34:21.910559   32908 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:34:21.910807   32908 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:34:21.910954   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:34:21.910998   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:34:21.925461   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0213 22:34:21.925854   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:34:21.926297   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:34:21.926319   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:34:21.926601   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:34:21.926773   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:34:21.926936   32908 start.go:304] JoinCluster: &{Name:multinode-413653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-413653 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:34:21.927055   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0213 22:34:21.927071   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:34:21.929923   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:34:21.930304   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:34:21.930335   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:34:21.930513   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:34:21.930735   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:34:21.930911   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:34:21.931063   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:34:22.134724   32908 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dj4ei2.5hvmefh7uge58kmr --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
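	The join line above is produced by running kubeadm token create --print-join-command --ttl=0 on the control plane over SSH. A sketch of the same invocation with os/exec, run directly on the control-plane host rather than through minikube's ssh_runner; it assumes kubeadm is in PATH and sudo is available:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as in the log, but run on the local host instead of
		// over SSH; requires kubeadm in PATH and admin credentials.
		out, err := exec.Command("sudo", "kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").CombinedOutput()
		if err != nil {
			fmt.Println("token create failed:", err, string(out))
			return
		}
		joinCmd := strings.TrimSpace(string(out))
		fmt.Println("join command:", joinCmd)
	}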
	I0213 22:34:22.134860   32908 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0213 22:34:22.134911   32908 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:34:22.135318   32908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:34:22.135373   32908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:34:22.149496   32908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46597
	I0213 22:34:22.149957   32908 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:34:22.150435   32908 main.go:141] libmachine: Using API Version  1
	I0213 22:34:22.150464   32908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:34:22.150773   32908 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:34:22.150946   32908 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:34:22.151166   32908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-413653-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0213 22:34:22.151188   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:34:22.154116   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:34:22.154486   32908 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:34:22.154513   32908 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:34:22.154681   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:34:22.154864   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:34:22.155029   32908 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:34:22.155165   32908 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:34:22.365457   32908 command_runner.go:130] > node/multinode-413653-m03 cordoned
	I0213 22:34:25.407477   32908 command_runner.go:130] > pod "busybox-5b5d89c9d6-xcg58" has DeletionTimestamp older than 1 seconds, skipping
	I0213 22:34:25.407511   32908 command_runner.go:130] > node/multinode-413653-m03 drained
	I0213 22:34:25.408704   32908 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0213 22:34:25.408731   32908 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-p2bqz, kube-system/kube-proxy-k4ggx
	I0213 22:34:25.408752   32908 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-413653-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.257566261s)
	I0213 22:34:25.408768   32908 node.go:108] successfully drained node "m03"
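	The drain before rejoining boils down to one kubectl invocation with the flags shown above. A sketch with os/exec that mirrors it, minus the deprecated --delete-local-data flag that kubectl warns about above; it assumes kubectl and a valid kubeconfig on the machine running it:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the drain call above, without the deprecated
		// --delete-local-data flag.
		args := []string{
			"drain", "multinode-413653-m03",
			"--force",
			"--grace-period=1",
			"--skip-wait-for-delete-timeout=1",
			"--disable-eviction",
			"--ignore-daemonsets",
			"--delete-emptydir-data",
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("drain failed:", err)
		}
	}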
	I0213 22:34:25.409204   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:34:25.409477   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:34:25.409787   32908 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0213 22:34:25.409840   32908 round_trippers.go:463] DELETE https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:34:25.409851   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:25.409863   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:25.409912   32908 round_trippers.go:473]     Content-Type: application/json
	I0213 22:34:25.409925   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:25.423226   32908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0213 22:34:25.423253   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:25.423262   32908 round_trippers.go:580]     Audit-Id: 28f5e1d5-96fc-4146-8cd3-f744bdda8dcd
	I0213 22:34:25.423270   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:25.423278   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:25.423285   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:25.423294   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:25.423311   32908 round_trippers.go:580]     Content-Length: 171
	I0213 22:34:25.423324   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:25 GMT
	I0213 22:34:25.423602   32908 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-413653-m03","kind":"nodes","uid":"3fd11080-7896-4845-a0ac-96b51f08d0cd"}}
	I0213 22:34:25.423661   32908 node.go:124] successfully deleted node "m03"
	I0213 22:34:25.423672   32908 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
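	The removal above is a single DELETE on /api/v1/nodes/multinode-413653-m03. With client-go the equivalent is one call; a sketch, assuming it runs where /var/lib/minikube/kubeconfig (the path used elsewhere in this log) is readable:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Equivalent of the DELETE /api/v1/nodes/multinode-413653-m03 call
		// above, issued through client-go; the kubeconfig path is the on-VM one.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		err = clientset.CoreV1().Nodes().Delete(context.TODO(),
			"multinode-413653-m03", metav1.DeleteOptions{})
		fmt.Println("delete result:", err)
	}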
	I0213 22:34:25.423695   32908 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0213 22:34:25.423726   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dj4ei2.5hvmefh7uge58kmr --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-413653-m03"
	I0213 22:34:25.491579   32908 command_runner.go:130] > [preflight] Running pre-flight checks
	I0213 22:34:25.692012   32908 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0213 22:34:25.692050   32908 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0213 22:34:25.751812   32908 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 22:34:25.751993   32908 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 22:34:25.752281   32908 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0213 22:34:25.897779   32908 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0213 22:34:26.421755   32908 command_runner.go:130] > This node has joined the cluster:
	I0213 22:34:26.421785   32908 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0213 22:34:26.421792   32908 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0213 22:34:26.421798   32908 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0213 22:34:26.424663   32908 command_runner.go:130] ! W0213 22:34:25.474481    2347 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0213 22:34:26.424696   32908 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0213 22:34:26.424708   32908 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0213 22:34:26.424721   32908 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0213 22:34:26.424746   32908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dj4ei2.5hvmefh7uge58kmr --discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-413653-m03": (1.001002881s)
	I0213 22:34:26.424768   32908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0213 22:34:26.699659   32908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=multinode-413653 minikube.k8s.io/updated_at=2024_02_13T22_34_26_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 22:34:26.809833   32908 command_runner.go:130] > node/multinode-413653-m02 labeled
	I0213 22:34:26.825093   32908 command_runner.go:130] > node/multinode-413653-m03 labeled
	I0213 22:34:26.826813   32908 start.go:306] JoinCluster complete in 4.899873381s
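	The kubectl label step above stamps the minikube.k8s.io/* labels onto every node that is not the primary. A client-go sketch of the same effect using a merge patch per matching node; the kubeconfig path is the on-VM one from the log, and the label values are copied from the command above:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same selector as the kubectl label call: every non-primary node.
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(),
			metav1.ListOptions{LabelSelector: "minikube.k8s.io/primary!=true"})
		if err != nil {
			panic(err)
		}
		patch := []byte(`{"metadata":{"labels":{` +
			`"minikube.k8s.io/version":"v1.32.0",` +
			`"minikube.k8s.io/name":"multinode-413653",` +
			`"minikube.k8s.io/primary":"false"}}}`)
		for _, n := range nodes.Items {
			_, err := clientset.CoreV1().Nodes().Patch(context.TODO(), n.Name,
				types.MergePatchType, patch, metav1.PatchOptions{})
			fmt.Println("labeled", n.Name, "err:", err)
		}
	}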
	I0213 22:34:26.826838   32908 cni.go:84] Creating CNI manager for ""
	I0213 22:34:26.826845   32908 cni.go:136] 3 nodes found, recommending kindnet
	I0213 22:34:26.826901   32908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0213 22:34:26.832319   32908 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0213 22:34:26.832348   32908 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0213 22:34:26.832359   32908 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0213 22:34:26.832365   32908 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0213 22:34:26.832371   32908 command_runner.go:130] > Access: 2024-02-13 22:30:12.772425470 +0000
	I0213 22:34:26.832376   32908 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0213 22:34:26.832381   32908 command_runner.go:130] > Change: 2024-02-13 22:30:10.912425470 +0000
	I0213 22:34:26.832385   32908 command_runner.go:130] >  Birth: -
	I0213 22:34:26.832433   32908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0213 22:34:26.832446   32908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0213 22:34:26.852950   32908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0213 22:34:27.262879   32908 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:34:27.262911   32908 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0213 22:34:27.262921   32908 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0213 22:34:27.262928   32908 command_runner.go:130] > daemonset.apps/kindnet configured
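	Applying the kindnet manifest is the same two steps seen above: confirm the portmap CNI plugin exists, then kubectl apply the manifest. A sketch with os.Stat and os/exec, using the paths from the log; it assumes it runs on the node where those paths exist:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors the two steps above: check for the portmap plugin, then
		// apply the kindnet manifest that minikube has copied to /var/tmp.
		if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
			fmt.Println("portmap plugin missing:", err)
			return
		}
		out, err := exec.Command("kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}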
	I0213 22:34:27.263334   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:34:27.263641   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:34:27.263940   32908 round_trippers.go:463] GET https://192.168.39.81:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0213 22:34:27.263953   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.263963   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.263971   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.267538   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.267560   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.267566   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.267572   32908 round_trippers.go:580]     Content-Length: 291
	I0213 22:34:27.267579   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.267586   32908 round_trippers.go:580]     Audit-Id: b3f791fc-c717-4a3f-b3dd-888bbb9c7c18
	I0213 22:34:27.267598   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.267611   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.267618   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.267641   32908 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"eccb91db-2bff-44e5-a49d-713d6c3d3d2b","resourceVersion":"856","creationTimestamp":"2024-02-13T22:20:28Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0213 22:34:27.267730   32908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-413653" context rescaled to 1 replicas
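	The rescale above goes through the Deployment's scale subresource rather than editing the Deployment itself. A client-go sketch of the same GetScale/UpdateScale round trip, pinning coredns to one replica; the kubeconfig path is again the on-VM one:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deploys := clientset.AppsV1().Deployments("kube-system")
		// Read the scale subresource, then write it back with one replica,
		// matching the GET/rescale seen in the log above.
		scale, err := deploys.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			_, err = deploys.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
		}
		fmt.Println("coredns replicas pinned to 1, err:", err)
	}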
	I0213 22:34:27.267760   32908 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.178 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0213 22:34:27.269671   32908 out.go:177] * Verifying Kubernetes components...
	I0213 22:34:27.270921   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:34:27.285083   32908 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:34:27.285292   32908 kapi.go:59] client config for multinode-413653: &rest.Config{Host:"https://192.168.39.81:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.crt", KeyFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/profiles/multinode-413653/client.key", CAFile:"/home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextPro
tos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c294a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0213 22:34:27.285496   32908 node_ready.go:35] waiting up to 6m0s for node "multinode-413653-m03" to be "Ready" ...
	I0213 22:34:27.285561   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:34:27.285569   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.285578   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.285584   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.288195   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:27.288220   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.288231   32908 round_trippers.go:580]     Audit-Id: f2f7fadc-76a6-48fd-ad93-17ba7d955dfb
	I0213 22:34:27.288240   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.288247   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.288256   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.288266   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.288277   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.288516   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m03","uid":"1d8b0755-144d-410b-b11a-64be052ac069","resourceVersion":"1186","creationTimestamp":"2024-02-13T22:34:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_34_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0213 22:34:27.288790   32908 node_ready.go:49] node "multinode-413653-m03" has status "Ready":"True"
	I0213 22:34:27.288805   32908 node_ready.go:38] duration metric: took 3.295431ms waiting for node "multinode-413653-m03" to be "Ready" ...
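	The readiness wait above is a poll of the node object until its Ready condition reports True. A plain polling sketch with client-go (no wait helpers), using the same 6m0s budget; the kubeconfig path is the on-VM one:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the node has a Ready condition with status True.
	func isReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Same check as the log's GET /api/v1/nodes/multinode-413653-m03 loop.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
				"multinode-413653-m03", metav1.GetOptions{})
			if err == nil && isReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}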
	I0213 22:34:27.288814   32908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:34:27.288876   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods
	I0213 22:34:27.288885   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.288892   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.288899   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.292591   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.292613   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.292620   32908 round_trippers.go:580]     Audit-Id: 4212236d-e917-4b67-bb82-7c34c9d85620
	I0213 22:34:27.292626   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.292632   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.292637   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.292645   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.292652   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.294360   32908 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1192"},"items":[{"metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82039 chars]
	I0213 22:34:27.296874   32908 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.296980   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-lq7xh
	I0213 22:34:27.296992   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.297002   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.297010   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.300053   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.300077   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.300088   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.300096   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.300104   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.300112   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.300121   32908 round_trippers.go:580]     Audit-Id: 9867d5f6-0959-4e60-a2d5-3b3ee398c1cc
	I0213 22:34:27.300133   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.300260   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-lq7xh","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2543314d-46b0-490c-b0e1-74f4777913f9","resourceVersion":"842","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"56faf738-4578-4b9c-9642-bb213edc2932","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56faf738-4578-4b9c-9642-bb213edc2932\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I0213 22:34:27.300741   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:27.300755   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.300762   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.300768   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.303195   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:27.303217   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.303227   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.303235   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.303243   32908 round_trippers.go:580]     Audit-Id: 850f8ae2-11bb-47b9-bf66-6792043adfa2
	I0213 22:34:27.303251   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.303259   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.303272   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.303701   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:27.304064   32908 pod_ready.go:92] pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:27.304083   32908 pod_ready.go:81] duration metric: took 7.178053ms waiting for pod "coredns-5dd5756b68-lq7xh" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.304092   32908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.304150   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-413653
	I0213 22:34:27.304161   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.304172   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.304183   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.306477   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:27.306498   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.306508   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.306517   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.306526   32908 round_trippers.go:580]     Audit-Id: 00dd2cc2-8903-451b-8f78-e973d4baf8c5
	I0213 22:34:27.306534   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.306543   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.306553   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.306683   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-413653","namespace":"kube-system","uid":"6adf5771-f03b-47ca-ad97-384b664fb8ab","resourceVersion":"833","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.81:2379","kubernetes.io/config.hash":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.mirror":"1228900b89b8f450a3daa0ff9995359c","kubernetes.io/config.seen":"2024-02-13T22:20:28.219611587Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I0213 22:34:27.307025   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:27.307040   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.307050   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.307059   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.309395   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:27.309416   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.309426   32908 round_trippers.go:580]     Audit-Id: 3aa85f14-a36f-4dec-9df1-55c4a8b42057
	I0213 22:34:27.309434   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.309442   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.309450   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.309464   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.309471   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.309777   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:27.310177   32908 pod_ready.go:92] pod "etcd-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:27.310197   32908 pod_ready.go:81] duration metric: took 6.098379ms waiting for pod "etcd-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.310221   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.310290   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-413653
	I0213 22:34:27.310301   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.310312   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.310324   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.316343   32908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0213 22:34:27.316368   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.316378   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.316386   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.316395   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.316404   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.316423   32908 round_trippers.go:580]     Audit-Id: a60b5403-112b-4461-8634-7791fc89c6e5
	I0213 22:34:27.316432   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.316614   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-413653","namespace":"kube-system","uid":"1540a1dc-5f90-45b2-8d9e-0f0a1581328a","resourceVersion":"860","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.81:8443","kubernetes.io/config.hash":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.mirror":"cba523270c42e30a16923f778faad5a9","kubernetes.io/config.seen":"2024-02-13T22:20:28.219614628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I0213 22:34:27.317098   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:27.317116   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.317128   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.317138   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.321534   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:34:27.321558   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.321570   32908 round_trippers.go:580]     Audit-Id: fe1d732a-ab81-42ee-8d98-94e223e96e95
	I0213 22:34:27.321579   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.321594   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.321602   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.321608   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.321613   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.321848   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:27.322239   32908 pod_ready.go:92] pod "kube-apiserver-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:27.322258   32908 pod_ready.go:81] duration metric: took 12.025765ms waiting for pod "kube-apiserver-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.322267   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.322315   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-413653
	I0213 22:34:27.322322   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.322332   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.322344   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.326204   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.326223   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.326230   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.326235   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.326242   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.326251   32908 round_trippers.go:580]     Audit-Id: 0b6fb050-c0fd-4ac3-b379-c82e9d9c54ec
	I0213 22:34:27.326261   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.326272   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.326681   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-413653","namespace":"kube-system","uid":"1d3432c0-f2cd-4371-9599-9a119dc1a8ab","resourceVersion":"835","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.mirror":"20ffed0771c7655c7c1ab2401f5bc8cd","kubernetes.io/config.seen":"2024-02-13T22:20:28.219615864Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I0213 22:34:27.327116   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:27.327133   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.327141   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.327148   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.329232   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:27.329252   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.329261   32908 round_trippers.go:580]     Audit-Id: 4dbd63df-4d14-4ab5-b9b8-2ed12ffab2d1
	I0213 22:34:27.329269   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.329276   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.329284   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.329293   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.329301   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.329601   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:27.330018   32908 pod_ready.go:92] pod "kube-controller-manager-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:27.330040   32908 pod_ready.go:81] duration metric: took 7.766898ms waiting for pod "kube-controller-manager-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.330053   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.486460   32908 request.go:629] Waited for 156.32732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:34:27.486552   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26ww9
	I0213 22:34:27.486565   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.486576   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.486593   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.490262   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.490287   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.490299   32908 round_trippers.go:580]     Audit-Id: 56f2ede7-8608-4417-aee1-8fda4083fa2b
	I0213 22:34:27.490309   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.490319   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.490335   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.490349   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.490361   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.490616   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-26ww9","generateName":"kube-proxy-","namespace":"kube-system","uid":"2b00e8eb-8829-460d-a162-7fe8c783c260","resourceVersion":"1026","creationTimestamp":"2024-02-13T22:21:19Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:21:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0213 22:34:27.686442   32908 request.go:629] Waited for 195.395984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:34:27.686515   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m02
	I0213 22:34:27.686521   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.686529   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.686541   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.689955   32908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0213 22:34:27.689980   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.689987   32908 round_trippers.go:580]     Audit-Id: 258b9a6b-3e53-45dc-9406-f513086a84d0
	I0213 22:34:27.689993   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.689998   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.690003   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.690008   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.690013   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.690679   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m02","uid":"1e9c4839-34c8-4278-ae96-8c649be816a3","resourceVersion":"1185","creationTimestamp":"2024-02-13T22:32:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_34_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:32:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0213 22:34:27.690991   32908 pod_ready.go:92] pod "kube-proxy-26ww9" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:27.691010   32908 pod_ready.go:81] duration metric: took 360.9451ms waiting for pod "kube-proxy-26ww9" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.691023   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:27.886136   32908 request.go:629] Waited for 195.046158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:34:27.886205   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h5bvp
	I0213 22:34:27.886214   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:27.886226   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:27.886244   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:27.890530   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:34:27.890563   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:27.890574   32908 round_trippers.go:580]     Audit-Id: 82cf3f2c-985a-43fa-ad93-e30e2a3d906d
	I0213 22:34:27.890583   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:27.890591   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:27.890598   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:27.890606   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:27.890614   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:27 GMT
	I0213 22:34:27.890971   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h5bvp","generateName":"kube-proxy-","namespace":"kube-system","uid":"d7a12109-66cd-41a9-b7e7-4e53a27a4ca7","resourceVersion":"801","creationTimestamp":"2024-02-13T22:20:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I0213 22:34:28.085838   32908 request.go:629] Waited for 194.398464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:28.085916   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:28.085928   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:28.085941   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:28.085955   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:28.088675   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:28.088699   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:28.088708   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:28.088716   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:28.088722   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:28.088730   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:28.088739   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:28 GMT
	I0213 22:34:28.088748   32908 round_trippers.go:580]     Audit-Id: b0ca719b-73d8-4d3c-89d8-801d75f20e4e
	I0213 22:34:28.089069   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:28.089369   32908 pod_ready.go:92] pod "kube-proxy-h5bvp" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:28.089384   32908 pod_ready.go:81] duration metric: took 398.354052ms waiting for pod "kube-proxy-h5bvp" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:28.089393   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:28.286508   32908 request.go:629] Waited for 197.050065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:34:28.286590   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k4ggx
	I0213 22:34:28.286595   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:28.286603   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:28.286609   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:28.289572   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:28.289593   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:28.289599   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:28.289605   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:28 GMT
	I0213 22:34:28.289610   32908 round_trippers.go:580]     Audit-Id: f0dd5664-f2a1-426d-a7fb-c434056754c0
	I0213 22:34:28.289614   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:28.289619   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:28.289624   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:28.289824   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-k4ggx","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9fa1c43-43a7-4737-8b10-e5327e355e9a","resourceVersion":"1205","creationTimestamp":"2024-02-13T22:22:09Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d9162890-8cea-4a86-bbfa-af52005e7a2f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:22:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9162890-8cea-4a86-bbfa-af52005e7a2f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0213 22:34:28.486618   32908 request.go:629] Waited for 196.355893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:34:28.486689   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653-m03
	I0213 22:34:28.486694   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:28.486702   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:28.486714   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:28.489512   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:28.489533   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:28.489544   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:28.489552   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:28.489559   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:28 GMT
	I0213 22:34:28.489569   32908 round_trippers.go:580]     Audit-Id: a3a6a37a-7094-46df-9e67-763e0ee5963a
	I0213 22:34:28.489578   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:28.489591   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:28.489734   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653-m03","uid":"1d8b0755-144d-410b-b11a-64be052ac069","resourceVersion":"1186","creationTimestamp":"2024-02-13T22:34:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_13T22_34_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:34:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0213 22:34:28.490035   32908 pod_ready.go:92] pod "kube-proxy-k4ggx" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:28.490053   32908 pod_ready.go:81] duration metric: took 400.654184ms waiting for pod "kube-proxy-k4ggx" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:28.490063   32908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:28.686502   32908 request.go:629] Waited for 196.365868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:34:28.686582   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-413653
	I0213 22:34:28.686590   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:28.686599   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:28.686609   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:28.689346   32908 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0213 22:34:28.689372   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:28.689380   32908 round_trippers.go:580]     Audit-Id: a262cc1f-b9f3-42a6-8f7f-12475d29508d
	I0213 22:34:28.689385   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:28.689390   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:28.689395   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:28.689400   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:28.689405   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:28 GMT
	I0213 22:34:28.689666   32908 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-413653","namespace":"kube-system","uid":"08710d51-793f-4606-9075-b5ab7331893e","resourceVersion":"861","creationTimestamp":"2024-02-13T22:20:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.mirror":"ea975223416cb6980630bbfbedf63235","kubernetes.io/config.seen":"2024-02-13T22:20:28.219616670Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-13T22:20:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I0213 22:34:28.886480   32908 request.go:629] Waited for 196.409422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:28.886560   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes/multinode-413653
	I0213 22:34:28.886567   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:28.886578   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:28.886588   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:28.890604   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:34:28.890627   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:28.890634   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:28.890639   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:28.890645   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:28.890650   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:28 GMT
	I0213 22:34:28.890655   32908 round_trippers.go:580]     Audit-Id: dab7a71a-2f7a-4daa-bbb5-75d02747aa2b
	I0213 22:34:28.890660   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:28.891303   32908 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-02-13T22:20:24Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I0213 22:34:28.891616   32908 pod_ready.go:92] pod "kube-scheduler-multinode-413653" in "kube-system" namespace has status "Ready":"True"
	I0213 22:34:28.891632   32908 pod_ready.go:81] duration metric: took 401.55815ms waiting for pod "kube-scheduler-multinode-413653" in "kube-system" namespace to be "Ready" ...
	I0213 22:34:28.891641   32908 pod_ready.go:38] duration metric: took 1.602820103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 22:34:28.891655   32908 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 22:34:28.891708   32908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:34:28.904704   32908 system_svc.go:56] duration metric: took 13.042926ms WaitForService to wait for kubelet.
	I0213 22:34:28.904730   32908 kubeadm.go:581] duration metric: took 1.636934228s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 22:34:28.904760   32908 node_conditions.go:102] verifying NodePressure condition ...
	I0213 22:34:29.086074   32908 request.go:629] Waited for 181.241838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.81:8443/api/v1/nodes
	I0213 22:34:29.086144   32908 round_trippers.go:463] GET https://192.168.39.81:8443/api/v1/nodes
	I0213 22:34:29.086150   32908 round_trippers.go:469] Request Headers:
	I0213 22:34:29.086160   32908 round_trippers.go:473]     Accept: application/json, */*
	I0213 22:34:29.086184   32908 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0213 22:34:29.090908   32908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0213 22:34:29.090933   32908 round_trippers.go:577] Response Headers:
	I0213 22:34:29.090944   32908 round_trippers.go:580]     Date: Tue, 13 Feb 2024 22:34:29 GMT
	I0213 22:34:29.090952   32908 round_trippers.go:580]     Audit-Id: 9dd32221-8eed-4288-90d7-f454f202f412
	I0213 22:34:29.090963   32908 round_trippers.go:580]     Cache-Control: no-cache, private
	I0213 22:34:29.090969   32908 round_trippers.go:580]     Content-Type: application/json
	I0213 22:34:29.090974   32908 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c73ea915-5a86-4831-919f-81247cc2e9d3
	I0213 22:34:29.090979   32908 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a0a4e352-0fa2-4ad7-b720-e8c7aef8f40b
	I0213 22:34:29.091746   32908 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1208"},"items":[{"metadata":{"name":"multinode-413653","uid":"0bc479e5-69d2-4e21-8ced-19a288f6bb5c","resourceVersion":"868","creationTimestamp":"2024-02-13T22:20:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-413653","kubernetes.io/os":"linux","minikube.k8s.io/commit":"613caefe13c19c397229c748a081b93da0bf2e2e","minikube.k8s.io/name":"multinode-413653","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_13T22_20_29_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16237 chars]
	I0213 22:34:29.092330   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:34:29.092352   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:34:29.092366   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:34:29.092372   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:34:29.092378   32908 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 22:34:29.092385   32908 node_conditions.go:123] node cpu capacity is 2
	I0213 22:34:29.092395   32908 node_conditions.go:105] duration metric: took 187.629466ms to run NodePressure ...
	I0213 22:34:29.092410   32908 start.go:228] waiting for startup goroutines ...
	I0213 22:34:29.092438   32908 start.go:242] writing updated cluster config ...
	I0213 22:34:29.092719   32908 ssh_runner.go:195] Run: rm -f paused
	I0213 22:34:29.145112   32908 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 22:34:29.147729   32908 out.go:177] * Done! kubectl is now configured to use "multinode-413653" cluster and "default" namespace by default
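	[Illustrative aside, not part of the captured log: the pod_ready lines above show minikube polling each kube-system control-plane pod through the API server until its Ready condition reports True, with the "Waited ... due to client-side throttling" entries coming from client-go's client-side rate limiter. A minimal client-go sketch of that readiness check, assuming a kubeconfig at $HOME/.kube/config and using one of the pod names seen in the trace as a hypothetical target, might look like this:]

	// readiness_probe.go: a minimal sketch of the polling pattern recorded in the
	// pod_ready log lines above (an assumption for illustration, not minikube's code).
	package main

	import (
		"context"
		"fmt"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		// A low QPS/Burst is what produces the "client-side throttling" waits in the log.
		cfg.QPS = 5
		cfg.Burst = 10
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Hypothetical target pod, named after one of the pods in the trace above.
		const ns, name = "kube-system", "etcd-multinode-413653"
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Printf("timed out waiting for %q\n", name)
	}

	[The same check can be expressed from the shell with "kubectl wait --for=condition=Ready pod/etcd-multinode-413653 -n kube-system --timeout=6m"; the Go version only makes the per-request GETs and the rate-limit waits visible, matching the trace above.]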
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 22:30:11 UTC, ends at Tue 2024-02-13 22:34:30 UTC. --
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.318599988Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cbf31608-e738-496e-b6ce-c06df2805c6c name=/runtime.v1.RuntimeService/Version
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.320923631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4eccff8c-1560-4fb2-9de9-49e517cfd14c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.321326100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707863670321310948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4eccff8c-1560-4fb2-9de9-49e517cfd14c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.322410030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ccd865f-a2bb-431d-9971-fdbb5ad2c416 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.322558668Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ccd865f-a2bb-431d-9971-fdbb5ad2c416 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.322772835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ae3363eeefc44c0b201c9eca89b86358d68335ced21202d73a7aca1d5536d79,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707863479590371136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bf3f1c6a95ccdece0ad73aa4eaa480175819990f167d1d933150d8f1855b67,PodSandboxId:92fb6456c372f21234ea72002fd4fe9f4cdb25bf5e3a5d8c81459212c6af60aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1707863457955714354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-2lg9w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c452313-d5a8-4bba-85f7-0304f8d69a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 406388d9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fab91ae101557dd3aac530913f01f8166ae2ce8bb20fa7cab17dbd6d25d1e2c,PodSandboxId:b6b3afe68204b9f543be89cd3fab3ec4f96c2430bb814d214857b42fcd74e24f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707863456071018866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lq7xh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2543314d-46b0-490c-b0e1-74f4777913f9,},Annotations:map[string]string{io.kubernetes.container.hash: 81071fd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57595db66695445bb819688dff7edded599499b1b47f79d392c7cde8c56b4ecd,PodSandboxId:3301e26518e9955ba68496d57f7b0f49e6f2a6b1874dfd7f690b5a02f583ef1b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1707863450873261650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-shxmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1684b3fd-4115-4ab7-88d4-dc1c95680525,},Annotations:map[string]string{io.kubernetes.container.hash: 3471fd4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c36625827091cd1e2c6dd2acd57605ad14c45f3f2f51e50f5dcdb6d9da5730d,PodSandboxId:18691af980ce31b48f87025e7bba73481486ce5fb54a566f3a88da2e6b637d43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707863448730833529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5bvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a12109-66cd-41a9-b7e7-4e53a2
7a4ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 7fee5733,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05c3ae0954c9dea0faee9748de9fca0995507a837860d46b84987a52470408e4,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707863448759057040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c
4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066090152e6313b3ea3c8b56261b7c72d400fff9d11352539f5091f1c0c3d4ab,PodSandboxId:f8195398b934e1013de36ffdacc60fab7f941828a48e846d07891a8b507da25a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707863441965182051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea975223416cb6980630bbfbedf63235,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1f64815b1a7be099f9d87ae71d1ec5be8daeccb23a9b4021b1505c38b0383e,PodSandboxId:526ba9f26a7d83db4d2d823d2387d4b5d860c80e0184831cb767c61c1ff377ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707863442004942275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1228900b89b8f450a3daa0ff9995359c,},Annotations:map[string]string{io.kubernetes.container.has
h: 3f4250f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:893748aa249d7b8f93001fc84ca5e8a05a9bfefece7f70d7e125bfe0285103d9,PodSandboxId:07c9aba4e1f438ad4766648bdb5b3ea195a409a3dba498e6acfa23f73fb02204,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707863441520713717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ffed0771c7655c7c1ab2401f5bc8cd,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977f1062191e9a4f6d7078a7730be7f50791496377a32c762f91bedfa3fddb9e,PodSandboxId:cee324fffd8e3f173703ff89d415481780219e38331ff03d73abea9ddedf450f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707863441370684757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba523270c42e30a16923f778faad5a9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 6197474a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ccd865f-a2bb-431d-9971-fdbb5ad2c416 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.364593406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f7da567a-6550-453b-8df2-8ef64ee8652a name=/runtime.v1.RuntimeService/Version
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.364698337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f7da567a-6550-453b-8df2-8ef64ee8652a name=/runtime.v1.RuntimeService/Version
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.366682265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c138c4cb-061f-4d22-b476-f80f96fcc255 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.367062758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707863670367044471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c138c4cb-061f-4d22-b476-f80f96fcc255 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.367994981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0dcf8b89-42c5-472e-b8c9-cf246555fab8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.368103835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0dcf8b89-42c5-472e-b8c9-cf246555fab8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.368353405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ae3363eeefc44c0b201c9eca89b86358d68335ced21202d73a7aca1d5536d79,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707863479590371136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bf3f1c6a95ccdece0ad73aa4eaa480175819990f167d1d933150d8f1855b67,PodSandboxId:92fb6456c372f21234ea72002fd4fe9f4cdb25bf5e3a5d8c81459212c6af60aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1707863457955714354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-2lg9w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c452313-d5a8-4bba-85f7-0304f8d69a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 406388d9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fab91ae101557dd3aac530913f01f8166ae2ce8bb20fa7cab17dbd6d25d1e2c,PodSandboxId:b6b3afe68204b9f543be89cd3fab3ec4f96c2430bb814d214857b42fcd74e24f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707863456071018866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lq7xh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2543314d-46b0-490c-b0e1-74f4777913f9,},Annotations:map[string]string{io.kubernetes.container.hash: 81071fd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57595db66695445bb819688dff7edded599499b1b47f79d392c7cde8c56b4ecd,PodSandboxId:3301e26518e9955ba68496d57f7b0f49e6f2a6b1874dfd7f690b5a02f583ef1b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1707863450873261650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-shxmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1684b3fd-4115-4ab7-88d4-dc1c95680525,},Annotations:map[string]string{io.kubernetes.container.hash: 3471fd4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c36625827091cd1e2c6dd2acd57605ad14c45f3f2f51e50f5dcdb6d9da5730d,PodSandboxId:18691af980ce31b48f87025e7bba73481486ce5fb54a566f3a88da2e6b637d43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707863448730833529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5bvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a12109-66cd-41a9-b7e7-4e53a2
7a4ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 7fee5733,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05c3ae0954c9dea0faee9748de9fca0995507a837860d46b84987a52470408e4,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707863448759057040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c
4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066090152e6313b3ea3c8b56261b7c72d400fff9d11352539f5091f1c0c3d4ab,PodSandboxId:f8195398b934e1013de36ffdacc60fab7f941828a48e846d07891a8b507da25a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707863441965182051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea975223416cb6980630bbfbedf63235,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1f64815b1a7be099f9d87ae71d1ec5be8daeccb23a9b4021b1505c38b0383e,PodSandboxId:526ba9f26a7d83db4d2d823d2387d4b5d860c80e0184831cb767c61c1ff377ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707863442004942275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1228900b89b8f450a3daa0ff9995359c,},Annotations:map[string]string{io.kubernetes.container.has
h: 3f4250f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:893748aa249d7b8f93001fc84ca5e8a05a9bfefece7f70d7e125bfe0285103d9,PodSandboxId:07c9aba4e1f438ad4766648bdb5b3ea195a409a3dba498e6acfa23f73fb02204,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707863441520713717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ffed0771c7655c7c1ab2401f5bc8cd,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977f1062191e9a4f6d7078a7730be7f50791496377a32c762f91bedfa3fddb9e,PodSandboxId:cee324fffd8e3f173703ff89d415481780219e38331ff03d73abea9ddedf450f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707863441370684757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba523270c42e30a16923f778faad5a9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 6197474a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0dcf8b89-42c5-472e-b8c9-cf246555fab8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.376217879Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c34093c-3a8a-44d1-bb70-b1e57643d92a name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.376505275Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b6b3afe68204b9f543be89cd3fab3ec4f96c2430bb814d214857b42fcd74e24f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-lq7xh,Uid:2543314d-46b0-490c-b0e1-74f4777913f9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863455220327776,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-lq7xh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2543314d-46b0-490c-b0e1-74f4777913f9,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T22:30:47.347174775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92fb6456c372f21234ea72002fd4fe9f4cdb25bf5e3a5d8c81459212c6af60aa,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-2lg9w,Uid:5c452313-d5a8-4bba-85f7-0304f8d69a3b,Namespace:default,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863455203554000,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-2lg9w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c452313-d5a8-4bba-85f7-0304f8d69a3b,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T22:30:47.347173343Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3301e26518e9955ba68496d57f7b0f49e6f2a6b1874dfd7f690b5a02f583ef1b,Metadata:&PodSandboxMetadata{Name:kindnet-shxmz,Uid:1684b3fd-4115-4ab7-88d4-dc1c95680525,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863447730040125,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-shxmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1684b3fd-4115-4ab7-88d4-dc1c95680525,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-02-13T22:30:47.347179963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:aecede5e-5ae2-4239-b920-ab1af32c4d38,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863447725829725,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c4d38,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T22:30:47.347178866Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18691af980ce31b48f87025e7bba73481486ce5fb54a566f3a88da2e6b637d43,Metadata:&PodSandboxMetadata{Name:kube-proxy-h5bvp,Uid:d7a12109-66cd-41a9-b7e7-4e53a27a4ca7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863447678197131,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h5bvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a12109-66cd-41a9-b7e7-4e53a27a4ca7,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T22:30:47.347161449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:526ba9f26a7d83db4d2d823d2387d4b5d860c80e0184831cb767c61c1ff377ad,Metadata:&PodSandboxMetadata{Name:etcd-multinode-413653,Uid:1228900b89b8f450a3daa0ff9995359c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863440867260913,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1228900b89b8f450a3daa0ff9995359c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.81:2379,kubernetes.io/config.hash: 1228900b89b8f450a3daa0ff9995359c,kubernetes.io/config.seen: 2024-02-13T22:30:40.346896421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cee324fffd8e3f173703ff89d415481780219e38331ff03d73abea9ddedf450f,Metada
ta:&PodSandboxMetadata{Name:kube-apiserver-multinode-413653,Uid:cba523270c42e30a16923f778faad5a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863440854981880,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba523270c42e30a16923f778faad5a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.81:8443,kubernetes.io/config.hash: cba523270c42e30a16923f778faad5a9,kubernetes.io/config.seen: 2024-02-13T22:30:40.346890733Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8195398b934e1013de36ffdacc60fab7f941828a48e846d07891a8b507da25a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-413653,Uid:ea975223416cb6980630bbfbedf63235,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863440847705776,Labels:map[string]string{compo
nent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea975223416cb6980630bbfbedf63235,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea975223416cb6980630bbfbedf63235,kubernetes.io/config.seen: 2024-02-13T22:30:40.346895463Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07c9aba4e1f438ad4766648bdb5b3ea195a409a3dba498e6acfa23f73fb02204,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-413653,Uid:20ffed0771c7655c7c1ab2401f5bc8cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707863440844522474,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ffed0771c7655c7c1ab2401f5bc8cd,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 20ffed0771c7655c7c1ab2401f5bc8cd,kubernetes.io/config.seen: 2024-02-13T22:30:40.346894484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=4c34093c-3a8a-44d1-bb70-b1e57643d92a name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.377564801Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a171a14c-cdf7-4e47-825b-eb9e711bbb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.377611912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a171a14c-cdf7-4e47-825b-eb9e711bbb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.377806015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ae3363eeefc44c0b201c9eca89b86358d68335ced21202d73a7aca1d5536d79,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707863479590371136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bf3f1c6a95ccdece0ad73aa4eaa480175819990f167d1d933150d8f1855b67,PodSandboxId:92fb6456c372f21234ea72002fd4fe9f4cdb25bf5e3a5d8c81459212c6af60aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1707863457955714354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-2lg9w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c452313-d5a8-4bba-85f7-0304f8d69a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 406388d9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fab91ae101557dd3aac530913f01f8166ae2ce8bb20fa7cab17dbd6d25d1e2c,PodSandboxId:b6b3afe68204b9f543be89cd3fab3ec4f96c2430bb814d214857b42fcd74e24f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707863456071018866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lq7xh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2543314d-46b0-490c-b0e1-74f4777913f9,},Annotations:map[string]string{io.kubernetes.container.hash: 81071fd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57595db66695445bb819688dff7edded599499b1b47f79d392c7cde8c56b4ecd,PodSandboxId:3301e26518e9955ba68496d57f7b0f49e6f2a6b1874dfd7f690b5a02f583ef1b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1707863450873261650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-shxmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1684b3fd-4115-4ab7-88d4-dc1c95680525,},Annotations:map[string]string{io.kubernetes.container.hash: 3471fd4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c36625827091cd1e2c6dd2acd57605ad14c45f3f2f51e50f5dcdb6d9da5730d,PodSandboxId:18691af980ce31b48f87025e7bba73481486ce5fb54a566f3a88da2e6b637d43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707863448730833529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5bvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a12109-66cd-41a9-b7e7-4e53a2
7a4ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 7fee5733,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066090152e6313b3ea3c8b56261b7c72d400fff9d11352539f5091f1c0c3d4ab,PodSandboxId:f8195398b934e1013de36ffdacc60fab7f941828a48e846d07891a8b507da25a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707863441965182051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea975223416cb6980630bbfbedf63235,},Ann
otations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1f64815b1a7be099f9d87ae71d1ec5be8daeccb23a9b4021b1505c38b0383e,PodSandboxId:526ba9f26a7d83db4d2d823d2387d4b5d860c80e0184831cb767c61c1ff377ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707863442004942275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1228900b89b8f450a3daa0ff9995359c,},Annotations:map[string]string{io.kubernetes.container.h
ash: 3f4250f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:893748aa249d7b8f93001fc84ca5e8a05a9bfefece7f70d7e125bfe0285103d9,PodSandboxId:07c9aba4e1f438ad4766648bdb5b3ea195a409a3dba498e6acfa23f73fb02204,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707863441520713717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ffed0771c7655c7c1ab2401f5bc8cd,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977f1062191e9a4f6d7078a7730be7f50791496377a32c762f91bedfa3fddb9e,PodSandboxId:cee324fffd8e3f173703ff89d415481780219e38331ff03d73abea9ddedf450f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707863441370684757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba523270c42e30a16923f778faad5a9,},Annotations:map[string]string{io.kubernetes
.container.hash: 6197474a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a171a14c-cdf7-4e47-825b-eb9e711bbb31 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.410933832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=30848a31-fdd4-41ab-b12d-fbdbb50016aa name=/runtime.v1.RuntimeService/Version
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.410991755Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=30848a31-fdd4-41ab-b12d-fbdbb50016aa name=/runtime.v1.RuntimeService/Version
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.412096690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=779789e9-b0a4-4e24-9533-754a06269e99 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.412603833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707863670412589189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=779789e9-b0a4-4e24-9533-754a06269e99 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.413090474Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=57b4494e-7548-433e-ba9b-35429f7be399 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.413139405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=57b4494e-7548-433e-ba9b-35429f7be399 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 22:34:30 multinode-413653 crio[712]: time="2024-02-13 22:34:30.413343804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ae3363eeefc44c0b201c9eca89b86358d68335ced21202d73a7aca1d5536d79,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707863479590371136,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2bf3f1c6a95ccdece0ad73aa4eaa480175819990f167d1d933150d8f1855b67,PodSandboxId:92fb6456c372f21234ea72002fd4fe9f4cdb25bf5e3a5d8c81459212c6af60aa,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1707863457955714354,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-2lg9w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c452313-d5a8-4bba-85f7-0304f8d69a3b,},Annotations:map[string]string{io.kubernetes.container.hash: 406388d9,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fab91ae101557dd3aac530913f01f8166ae2ce8bb20fa7cab17dbd6d25d1e2c,PodSandboxId:b6b3afe68204b9f543be89cd3fab3ec4f96c2430bb814d214857b42fcd74e24f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707863456071018866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lq7xh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2543314d-46b0-490c-b0e1-74f4777913f9,},Annotations:map[string]string{io.kubernetes.container.hash: 81071fd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57595db66695445bb819688dff7edded599499b1b47f79d392c7cde8c56b4ecd,PodSandboxId:3301e26518e9955ba68496d57f7b0f49e6f2a6b1874dfd7f690b5a02f583ef1b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1707863450873261650,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-shxmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1684b3fd-4115-4ab7-88d4-dc1c95680525,},Annotations:map[string]string{io.kubernetes.container.hash: 3471fd4a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c36625827091cd1e2c6dd2acd57605ad14c45f3f2f51e50f5dcdb6d9da5730d,PodSandboxId:18691af980ce31b48f87025e7bba73481486ce5fb54a566f3a88da2e6b637d43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707863448730833529,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5bvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7a12109-66cd-41a9-b7e7-4e53a2
7a4ca7,},Annotations:map[string]string{io.kubernetes.container.hash: 7fee5733,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05c3ae0954c9dea0faee9748de9fca0995507a837860d46b84987a52470408e4,PodSandboxId:5ab87ad69be13d41a6a927cfc8d502384fdeb8a8340a8572fa61e7a4c5cdabc7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707863448759057040,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aecede5e-5ae2-4239-b920-ab1af32c
4d38,},Annotations:map[string]string{io.kubernetes.container.hash: 1be67f5c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:066090152e6313b3ea3c8b56261b7c72d400fff9d11352539f5091f1c0c3d4ab,PodSandboxId:f8195398b934e1013de36ffdacc60fab7f941828a48e846d07891a8b507da25a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707863441965182051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea975223416cb6980630bbfbedf63235,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1f64815b1a7be099f9d87ae71d1ec5be8daeccb23a9b4021b1505c38b0383e,PodSandboxId:526ba9f26a7d83db4d2d823d2387d4b5d860c80e0184831cb767c61c1ff377ad,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707863442004942275,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1228900b89b8f450a3daa0ff9995359c,},Annotations:map[string]string{io.kubernetes.container.has
h: 3f4250f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:893748aa249d7b8f93001fc84ca5e8a05a9bfefece7f70d7e125bfe0285103d9,PodSandboxId:07c9aba4e1f438ad4766648bdb5b3ea195a409a3dba498e6acfa23f73fb02204,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707863441520713717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20ffed0771c7655c7c1ab2401f5bc8cd,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977f1062191e9a4f6d7078a7730be7f50791496377a32c762f91bedfa3fddb9e,PodSandboxId:cee324fffd8e3f173703ff89d415481780219e38331ff03d73abea9ddedf450f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707863441370684757,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-413653,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba523270c42e30a16923f778faad5a9,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 6197474a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=57b4494e-7548-433e-ba9b-35429f7be399 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ae3363eeefc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   5ab87ad69be13       storage-provisioner
	a2bf3f1c6a95c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   92fb6456c372f       busybox-5b5d89c9d6-2lg9w
	3fab91ae10155       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   b6b3afe68204b       coredns-5dd5756b68-lq7xh
	57595db666954       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   3301e26518e99       kindnet-shxmz
	05c3ae0954c9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   5ab87ad69be13       storage-provisioner
	8c36625827091       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   18691af980ce3       kube-proxy-h5bvp
	cf1f64815b1a7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   526ba9f26a7d8       etcd-multinode-413653
	066090152e631       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   f8195398b934e       kube-scheduler-multinode-413653
	893748aa249d7       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   07c9aba4e1f43       kube-controller-manager-multinode-413653
	977f1062191e9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   cee324fffd8e3       kube-apiserver-multinode-413653
	
	
	==> coredns [3fab91ae101557dd3aac530913f01f8166ae2ce8bb20fa7cab17dbd6d25d1e2c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45888 - 47783 "HINFO IN 653766290235370856.7931244921594260999. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010039908s
	
	
	==> describe nodes <==
	Name:               multinode-413653
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-413653
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=multinode-413653
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T22_20_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:20:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-413653
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:34:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:31:16 +0000   Tue, 13 Feb 2024 22:20:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:31:16 +0000   Tue, 13 Feb 2024 22:20:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:31:16 +0000   Tue, 13 Feb 2024 22:20:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:31:16 +0000   Tue, 13 Feb 2024 22:30:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    multinode-413653
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 caeb7bcfb07e4c8bb75c3c598df34862
	  System UUID:                caeb7bcf-b07e-4c8b-b75c-3c598df34862
	  Boot ID:                    ca5bdb4d-b89a-4691-9c12-36d4523e829e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-2lg9w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-lq7xh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-413653                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-shxmz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-413653             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-413653    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-h5bvp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-413653             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-413653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-413653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-413653 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-413653 event: Registered Node multinode-413653 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-413653 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-413653 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-413653 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-413653 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-413653 event: Registered Node multinode-413653 in Controller
	
	
	Name:               multinode-413653-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-413653-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=multinode-413653
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_13T22_34_26_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-413653-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 22:34:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:32:44 +0000   Tue, 13 Feb 2024 22:32:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:32:44 +0000   Tue, 13 Feb 2024 22:32:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:32:44 +0000   Tue, 13 Feb 2024 22:32:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:32:44 +0000   Tue, 13 Feb 2024 22:32:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    multinode-413653-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bcae8f44cab4679adc2f853fc07e0be
	  System UUID:                3bcae8f4-4cab-4679-adc2-f853fc07e0be
	  Boot ID:                    92e63f82-f8c9-4d57-b53a-3ca250cd7a32
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-5x76d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-4m5lx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-26ww9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 104s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-413653-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-413653-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-413653-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-413653-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                  kubelet          Node multinode-413653-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m12s (x2 over 3m12s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet          Node multinode-413653-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet          Node multinode-413653-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet          Node multinode-413653-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet          Node multinode-413653-m02 status is now: NodeReady
	  Normal   RegisteredNode           101s                   node-controller  Node multinode-413653-m02 event: Registered Node multinode-413653-m02 in Controller
	
	
	Name:               multinode-413653-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-413653-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=multinode-413653
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_13T22_34_26_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-413653-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 22:34:26 +0000   Tue, 13 Feb 2024 22:34:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 22:34:26 +0000   Tue, 13 Feb 2024 22:34:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 22:34:26 +0000   Tue, 13 Feb 2024 22:34:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 22:34:26 +0000   Tue, 13 Feb 2024 22:34:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    multinode-413653-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 212da2b594bf4cc88ac7efc36f61bdb2
	  System UUID:                212da2b5-94bf-4cc8-8ac7-efc36f61bdb2
	  Boot ID:                    c6d1fba2-eefe-4b73-a6bd-4d9444ffb15b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-xcg58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-p2bqz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-k4ggx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 2s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-413653-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-413653-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-413653-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-413653-m03 status is now: NodeReady
	  Normal   NodeNotReady             71s                 kubelet     Node multinode-413653-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)     kubelet     Node multinode-413653-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)     kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                  kubelet     Node multinode-413653-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)     kubelet     Node multinode-413653-m03 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[Feb13 22:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068989] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.511720] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.500467] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149827] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.585612] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.044050] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.111273] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.158327] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.114972] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.229997] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.404795] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[ +19.061382] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [cf1f64815b1a7be099f9d87ae71d1ec5be8daeccb23a9b4021b1505c38b0383e] <==
	{"level":"info","ts":"2024-02-13T22:30:43.774796Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:30:43.774818Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-13T22:30:43.774984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 switched to configuration voters=(9364630335907098887)"}
	{"level":"info","ts":"2024-02-13T22:30:43.775028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","added-peer-id":"81f5d9acb096f107","added-peer-peer-urls":["https://192.168.39.81:2380"]}
	{"level":"info","ts":"2024-02-13T22:30:43.775138Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a77bf2d9a9fbb59e","local-member-id":"81f5d9acb096f107","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:30:43.775169Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T22:30:43.78033Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-13T22:30:43.780612Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"81f5d9acb096f107","initial-advertise-peer-urls":["https://192.168.39.81:2380"],"listen-peer-urls":["https://192.168.39.81:2380"],"advertise-client-urls":["https://192.168.39.81:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.81:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T22:30:43.780664Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T22:30:43.780762Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-02-13T22:30:43.780769Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.81:2380"}
	{"level":"info","ts":"2024-02-13T22:30:44.854823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-13T22:30:44.854949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-13T22:30:44.855024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgPreVoteResp from 81f5d9acb096f107 at term 2"}
	{"level":"info","ts":"2024-02-13T22:30:44.855095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became candidate at term 3"}
	{"level":"info","ts":"2024-02-13T22:30:44.855142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 received MsgVoteResp from 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-02-13T22:30:44.85518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"81f5d9acb096f107 became leader at term 3"}
	{"level":"info","ts":"2024-02-13T22:30:44.855214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 81f5d9acb096f107 elected leader 81f5d9acb096f107 at term 3"}
	{"level":"info","ts":"2024-02-13T22:30:44.858032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:30:44.857978Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"81f5d9acb096f107","local-member-attributes":"{Name:multinode-413653 ClientURLs:[https://192.168.39.81:2379]}","request-path":"/0/members/81f5d9acb096f107/attributes","cluster-id":"a77bf2d9a9fbb59e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T22:30:44.859346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T22:30:44.859538Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T22:30:44.859595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T22:30:44.859072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T22:30:44.860939Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.81:2379"}
	
	
	==> kernel <==
	 22:34:30 up 4 min,  0 users,  load average: 0.10, 0.26, 0.13
	Linux multinode-413653 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [57595db66695445bb819688dff7edded599499b1b47f79d392c7cde8c56b4ecd] <==
	I0213 22:33:42.586250       1 main.go:250] Node multinode-413653-m03 has CIDR [10.244.3.0/24] 
	I0213 22:33:52.595034       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0213 22:33:52.595063       1 main.go:227] handling current node
	I0213 22:33:52.595076       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0213 22:33:52.595081       1 main.go:250] Node multinode-413653-m02 has CIDR [10.244.1.0/24] 
	I0213 22:33:52.595188       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0213 22:33:52.595224       1 main.go:250] Node multinode-413653-m03 has CIDR [10.244.3.0/24] 
	I0213 22:34:02.602505       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0213 22:34:02.602553       1 main.go:227] handling current node
	I0213 22:34:02.602575       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0213 22:34:02.602581       1 main.go:250] Node multinode-413653-m02 has CIDR [10.244.1.0/24] 
	I0213 22:34:02.602700       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0213 22:34:02.602732       1 main.go:250] Node multinode-413653-m03 has CIDR [10.244.3.0/24] 
	I0213 22:34:12.611129       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0213 22:34:12.611198       1 main.go:227] handling current node
	I0213 22:34:12.611220       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0213 22:34:12.611226       1 main.go:250] Node multinode-413653-m02 has CIDR [10.244.1.0/24] 
	I0213 22:34:12.611375       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0213 22:34:12.611380       1 main.go:250] Node multinode-413653-m03 has CIDR [10.244.3.0/24] 
	I0213 22:34:22.616964       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0213 22:34:22.617024       1 main.go:227] handling current node
	I0213 22:34:22.617036       1 main.go:223] Handling node with IPs: map[192.168.39.94:{}]
	I0213 22:34:22.617042       1 main.go:250] Node multinode-413653-m02 has CIDR [10.244.1.0/24] 
	I0213 22:34:22.617186       1 main.go:223] Handling node with IPs: map[192.168.39.178:{}]
	I0213 22:34:22.617224       1 main.go:250] Node multinode-413653-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [977f1062191e9a4f6d7078a7730be7f50791496377a32c762f91bedfa3fddb9e] <==
	I0213 22:30:46.322671       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0213 22:30:46.322687       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0213 22:30:46.322700       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0213 22:30:46.424289       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0213 22:30:46.458072       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0213 22:30:46.459508       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0213 22:30:46.459973       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0213 22:30:46.459991       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0213 22:30:46.460049       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0213 22:30:46.465114       1 shared_informer.go:318] Caches are synced for configmaps
	I0213 22:30:46.470412       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0213 22:30:46.471341       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0213 22:30:46.471365       1 aggregator.go:166] initial CRD sync complete...
	I0213 22:30:46.471370       1 autoregister_controller.go:141] Starting autoregister controller
	I0213 22:30:46.471374       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0213 22:30:46.471379       1 cache.go:39] Caches are synced for autoregister controller
	E0213 22:30:46.491598       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0213 22:30:47.271342       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0213 22:30:49.230146       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0213 22:30:49.399806       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0213 22:30:49.415797       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0213 22:30:49.523246       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0213 22:30:49.534858       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0213 22:30:59.082517       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0213 22:30:59.088644       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [893748aa249d7b8f93001fc84ca5e8a05a9bfefece7f70d7e125bfe0285103d9] <==
	I0213 22:32:44.232239       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m03"
	I0213 22:32:44.234876       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-w6ghx" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-w6ghx"
	I0213 22:32:44.244776       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-413653-m02" podCIDRs=["10.244.1.0/24"]
	I0213 22:32:44.390655       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m02"
	I0213 22:32:45.140162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="68.628µs"
	I0213 22:32:49.151030       1 event.go:307] "Event occurred" object="multinode-413653-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-413653-m02 event: Registered Node multinode-413653-m02 in Controller"
	I0213 22:32:58.421000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="98.088µs"
	I0213 22:32:59.021600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="182.954µs"
	I0213 22:32:59.027660       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="253.807µs"
	I0213 22:33:19.784820       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m02"
	I0213 22:34:22.405786       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-5x76d"
	I0213 22:34:22.414163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.9125ms"
	I0213 22:34:22.446033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="31.763539ms"
	I0213 22:34:22.446141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="44.602µs"
	I0213 22:34:22.446193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="19.796µs"
	I0213 22:34:22.463072       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.507µs"
	I0213 22:34:24.308554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.446761ms"
	I0213 22:34:24.308903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="133.47µs"
	I0213 22:34:25.419143       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m02"
	I0213 22:34:26.104309       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-413653-m03\" does not exist"
	I0213 22:34:26.104410       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m02"
	I0213 22:34:26.104838       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-xcg58" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-xcg58"
	I0213 22:34:26.128169       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-413653-m03" podCIDRs=["10.244.2.0/24"]
	I0213 22:34:26.266407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-413653-m02"
	I0213 22:34:27.068718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="73.321µs"
	
	
	==> kube-proxy [8c36625827091cd1e2c6dd2acd57605ad14c45f3f2f51e50f5dcdb6d9da5730d] <==
	I0213 22:30:49.178192       1 server_others.go:69] "Using iptables proxy"
	I0213 22:30:49.188584       1 node.go:141] Successfully retrieved node IP: 192.168.39.81
	I0213 22:30:49.303117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 22:30:49.303368       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 22:30:49.311866       1 server_others.go:152] "Using iptables Proxier"
	I0213 22:30:49.312020       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 22:30:49.312202       1 server.go:846] "Version info" version="v1.28.4"
	I0213 22:30:49.312413       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:30:49.313074       1 config.go:188] "Starting service config controller"
	I0213 22:30:49.313135       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 22:30:49.313166       1 config.go:97] "Starting endpoint slice config controller"
	I0213 22:30:49.313181       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 22:30:49.313827       1 config.go:315] "Starting node config controller"
	I0213 22:30:49.313867       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 22:30:49.413930       1 shared_informer.go:318] Caches are synced for node config
	I0213 22:30:49.418653       1 shared_informer.go:318] Caches are synced for service config
	I0213 22:30:49.418683       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [066090152e6313b3ea3c8b56261b7c72d400fff9d11352539f5091f1c0c3d4ab] <==
	I0213 22:30:43.888757       1 serving.go:348] Generated self-signed cert in-memory
	W0213 22:30:46.358765       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 22:30:46.358890       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 22:30:46.358905       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 22:30:46.358912       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 22:30:46.428077       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0213 22:30:46.428167       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 22:30:46.430197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0213 22:30:46.430847       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0213 22:30:46.436510       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0213 22:30:46.430963       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0213 22:30:46.538619       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 22:30:11 UTC, ends at Tue 2024-02-13 22:34:31 UTC. --
	Feb 13 22:30:49 multinode-413653 kubelet[918]: E0213 22:30:49.035035     918 projected.go:198] Error preparing data for projected volume kube-api-access-rl4jg for pod default/busybox-5b5d89c9d6-2lg9w: object "default"/"kube-root-ca.crt" not registered
	Feb 13 22:30:49 multinode-413653 kubelet[918]: E0213 22:30:49.035085     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c452313-d5a8-4bba-85f7-0304f8d69a3b-kube-api-access-rl4jg podName:5c452313-d5a8-4bba-85f7-0304f8d69a3b nodeName:}" failed. No retries permitted until 2024-02-13 22:30:51.035072352 +0000 UTC m=+10.916823717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rl4jg" (UniqueName: "kubernetes.io/projected/5c452313-d5a8-4bba-85f7-0304f8d69a3b-kube-api-access-rl4jg") pod "busybox-5b5d89c9d6-2lg9w" (UID: "5c452313-d5a8-4bba-85f7-0304f8d69a3b") : object "default"/"kube-root-ca.crt" not registered
	Feb 13 22:30:49 multinode-413653 kubelet[918]: E0213 22:30:49.374683     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-lq7xh" podUID="2543314d-46b0-490c-b0e1-74f4777913f9"
	Feb 13 22:30:49 multinode-413653 kubelet[918]: E0213 22:30:49.374788     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-2lg9w" podUID="5c452313-d5a8-4bba-85f7-0304f8d69a3b"
	Feb 13 22:30:50 multinode-413653 kubelet[918]: E0213 22:30:50.950334     918 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 13 22:30:50 multinode-413653 kubelet[918]: E0213 22:30:50.950493     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2543314d-46b0-490c-b0e1-74f4777913f9-config-volume podName:2543314d-46b0-490c-b0e1-74f4777913f9 nodeName:}" failed. No retries permitted until 2024-02-13 22:30:54.950420972 +0000 UTC m=+14.832172336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2543314d-46b0-490c-b0e1-74f4777913f9-config-volume") pod "coredns-5dd5756b68-lq7xh" (UID: "2543314d-46b0-490c-b0e1-74f4777913f9") : object "kube-system"/"coredns" not registered
	Feb 13 22:30:51 multinode-413653 kubelet[918]: E0213 22:30:51.051709     918 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Feb 13 22:30:51 multinode-413653 kubelet[918]: E0213 22:30:51.051745     918 projected.go:198] Error preparing data for projected volume kube-api-access-rl4jg for pod default/busybox-5b5d89c9d6-2lg9w: object "default"/"kube-root-ca.crt" not registered
	Feb 13 22:30:51 multinode-413653 kubelet[918]: E0213 22:30:51.051797     918 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c452313-d5a8-4bba-85f7-0304f8d69a3b-kube-api-access-rl4jg podName:5c452313-d5a8-4bba-85f7-0304f8d69a3b nodeName:}" failed. No retries permitted until 2024-02-13 22:30:55.051783396 +0000 UTC m=+14.933534762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rl4jg" (UniqueName: "kubernetes.io/projected/5c452313-d5a8-4bba-85f7-0304f8d69a3b-kube-api-access-rl4jg") pod "busybox-5b5d89c9d6-2lg9w" (UID: "5c452313-d5a8-4bba-85f7-0304f8d69a3b") : object "default"/"kube-root-ca.crt" not registered
	Feb 13 22:30:51 multinode-413653 kubelet[918]: E0213 22:30:51.374495     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-lq7xh" podUID="2543314d-46b0-490c-b0e1-74f4777913f9"
	Feb 13 22:30:51 multinode-413653 kubelet[918]: E0213 22:30:51.374976     918 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-2lg9w" podUID="5c452313-d5a8-4bba-85f7-0304f8d69a3b"
	Feb 13 22:30:52 multinode-413653 kubelet[918]: I0213 22:30:52.902347     918 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Feb 13 22:31:19 multinode-413653 kubelet[918]: I0213 22:31:19.561276     918 scope.go:117] "RemoveContainer" containerID="05c3ae0954c9dea0faee9748de9fca0995507a837860d46b84987a52470408e4"
	Feb 13 22:31:40 multinode-413653 kubelet[918]: E0213 22:31:40.396817     918 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 22:31:40 multinode-413653 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 22:31:40 multinode-413653 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 22:31:40 multinode-413653 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 22:32:40 multinode-413653 kubelet[918]: E0213 22:32:40.402330     918 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 22:32:40 multinode-413653 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 22:32:40 multinode-413653 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 22:32:40 multinode-413653 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 22:33:40 multinode-413653 kubelet[918]: E0213 22:33:40.394675     918 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 22:33:40 multinode-413653 kubelet[918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 22:33:40 multinode-413653 kubelet[918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 22:33:40 multinode-413653 kubelet[918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-413653 -n multinode-413653
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-413653 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.82s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (142.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-413653 stop: exit status 82 (2m0.299307996s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-413653"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-413653 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-413653 status: exit status 3 (18.620612424s)

                                                
                                                
-- stdout --
	multinode-413653
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-413653-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 22:36:52.374392   35181 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host
	E0213 22:36:52.374429   35181 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-413653 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-413653 -n multinode-413653
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-413653 -n multinode-413653: exit status 3 (3.192223512s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 22:36:55.734290   35273 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host
	E0213 22:36:55.734316   35273 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-413653" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.11s)

                                                
                                    
x
+
TestPreload (281.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-555294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0213 22:47:03.710609   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:47:14.184855   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-555294 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m20.087850206s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-555294 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-555294 image pull gcr.io/k8s-minikube/busybox: (1.167573557s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-555294
E0213 22:49:11.137247   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:49:21.414044   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-555294: exit status 82 (2m0.28694129s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-555294"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-555294 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-02-13 22:49:38.447302099 +0000 UTC m=+3198.062076002
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-555294 -n test-preload-555294
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-555294 -n test-preload-555294: exit status 3 (18.571298279s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 22:49:57.014234   38278 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.95:22: connect: no route to host
	E0213 22:49:57.014258   38278 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.95:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-555294" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-555294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-555294
--- FAIL: TestPreload (281.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (139.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-245122 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-245122 --alsologtostderr -v=3: exit status 82 (2m0.780792118s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-245122"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 23:00:39.508657   47826 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:00:39.508818   47826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:00:39.508829   47826 out.go:304] Setting ErrFile to fd 2...
	I0213 23:00:39.508837   47826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:00:39.509047   47826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:00:39.509319   47826 out.go:298] Setting JSON to false
	I0213 23:00:39.509420   47826 mustload.go:65] Loading cluster: old-k8s-version-245122
	I0213 23:00:39.510562   47826 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:00:39.510693   47826 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:00:39.510925   47826 mustload.go:65] Loading cluster: old-k8s-version-245122
	I0213 23:00:39.511342   47826 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:00:39.511390   47826 stop.go:39] StopHost: old-k8s-version-245122
	I0213 23:00:39.511898   47826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:00:39.511945   47826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:00:39.528774   47826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0213 23:00:39.529262   47826 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:00:39.529979   47826 main.go:141] libmachine: Using API Version  1
	I0213 23:00:39.530003   47826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:00:39.530375   47826 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:00:39.532699   47826 out.go:177] * Stopping node "old-k8s-version-245122"  ...
	I0213 23:00:39.533772   47826 main.go:141] libmachine: Stopping "old-k8s-version-245122"...
	I0213 23:00:39.533786   47826 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:00:40.008114   47826 main.go:141] libmachine: (old-k8s-version-245122) Calling .Stop
	I0213 23:00:40.012428   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 0/120
	I0213 23:00:41.014349   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 1/120
	I0213 23:00:42.016605   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 2/120
	I0213 23:00:43.018127   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 3/120
	I0213 23:00:44.020495   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 4/120
	I0213 23:00:45.022482   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 5/120
	I0213 23:00:46.024520   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 6/120
	I0213 23:00:47.026408   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 7/120
	I0213 23:00:48.028591   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 8/120
	I0213 23:00:49.030475   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 9/120
	I0213 23:00:50.031980   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 10/120
	I0213 23:00:51.033457   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 11/120
	I0213 23:00:52.035536   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 12/120
	I0213 23:00:53.037062   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 13/120
	I0213 23:00:54.038527   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 14/120
	I0213 23:00:55.040648   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 15/120
	I0213 23:00:56.042153   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 16/120
	I0213 23:00:57.043458   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 17/120
	I0213 23:00:58.044953   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 18/120
	I0213 23:00:59.046386   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 19/120
	I0213 23:01:00.048523   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 20/120
	I0213 23:01:01.050037   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 21/120
	I0213 23:01:02.052506   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 22/120
	I0213 23:01:03.054099   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 23/120
	I0213 23:01:04.056644   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 24/120
	I0213 23:01:05.058783   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 25/120
	I0213 23:01:06.060252   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 26/120
	I0213 23:01:07.061763   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 27/120
	I0213 23:01:08.064292   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 28/120
	I0213 23:01:09.065728   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 29/120
	I0213 23:01:10.067973   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 30/120
	I0213 23:01:11.069642   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 31/120
	I0213 23:01:12.071017   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 32/120
	I0213 23:01:13.072392   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 33/120
	I0213 23:01:14.073928   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 34/120
	I0213 23:01:15.075587   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 35/120
	I0213 23:01:16.077108   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 36/120
	I0213 23:01:17.078495   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 37/120
	I0213 23:01:18.080150   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 38/120
	I0213 23:01:19.081716   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 39/120
	I0213 23:01:20.084285   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 40/120
	I0213 23:01:21.086235   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 41/120
	I0213 23:01:22.087786   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 42/120
	I0213 23:01:23.089405   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 43/120
	I0213 23:01:24.091001   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 44/120
	I0213 23:01:25.093188   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 45/120
	I0213 23:01:26.094696   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 46/120
	I0213 23:01:27.096398   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 47/120
	I0213 23:01:28.097686   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 48/120
	I0213 23:01:29.099166   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 49/120
	I0213 23:01:30.100521   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 50/120
	I0213 23:01:31.102084   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 51/120
	I0213 23:01:32.103350   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 52/120
	I0213 23:01:33.105055   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 53/120
	I0213 23:01:34.106736   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 54/120
	I0213 23:01:35.108515   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 55/120
	I0213 23:01:36.110089   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 56/120
	I0213 23:01:37.112756   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 57/120
	I0213 23:01:38.114512   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 58/120
	I0213 23:01:39.116806   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 59/120
	I0213 23:01:40.119145   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 60/120
	I0213 23:01:41.121017   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 61/120
	I0213 23:01:42.123396   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 62/120
	I0213 23:01:43.124914   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 63/120
	I0213 23:01:44.126364   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 64/120
	I0213 23:01:45.128228   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 65/120
	I0213 23:01:46.130056   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 66/120
	I0213 23:01:47.132520   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 67/120
	I0213 23:01:48.134118   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 68/120
	I0213 23:01:49.135569   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 69/120
	I0213 23:01:50.137788   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 70/120
	I0213 23:01:51.139982   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 71/120
	I0213 23:01:52.141841   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 72/120
	I0213 23:01:53.143476   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 73/120
	I0213 23:01:54.144851   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 74/120
	I0213 23:01:55.147147   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 75/120
	I0213 23:01:56.148452   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 76/120
	I0213 23:01:57.149948   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 77/120
	I0213 23:01:58.151678   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 78/120
	I0213 23:01:59.153519   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 79/120
	I0213 23:02:00.155076   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 80/120
	I0213 23:02:01.156530   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 81/120
	I0213 23:02:02.157948   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 82/120
	I0213 23:02:03.159479   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 83/120
	I0213 23:02:04.160871   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 84/120
	I0213 23:02:05.162886   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 85/120
	I0213 23:02:06.164360   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 86/120
	I0213 23:02:07.165795   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 87/120
	I0213 23:02:08.167610   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 88/120
	I0213 23:02:09.170012   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 89/120
	I0213 23:02:10.172329   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 90/120
	I0213 23:02:11.173668   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 91/120
	I0213 23:02:12.175537   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 92/120
	I0213 23:02:13.176872   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 93/120
	I0213 23:02:14.178271   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 94/120
	I0213 23:02:15.180315   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 95/120
	I0213 23:02:16.181944   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 96/120
	I0213 23:02:17.183344   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 97/120
	I0213 23:02:18.185166   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 98/120
	I0213 23:02:19.186579   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 99/120
	I0213 23:02:20.187827   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 100/120
	I0213 23:02:21.189440   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 101/120
	I0213 23:02:22.191771   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 102/120
	I0213 23:02:23.193252   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 103/120
	I0213 23:02:24.194570   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 104/120
	I0213 23:02:25.196328   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 105/120
	I0213 23:02:26.198083   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 106/120
	I0213 23:02:27.200431   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 107/120
	I0213 23:02:28.202734   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 108/120
	I0213 23:02:29.204236   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 109/120
	I0213 23:02:30.205833   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 110/120
	I0213 23:02:31.207618   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 111/120
	I0213 23:02:32.209199   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 112/120
	I0213 23:02:33.210689   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 113/120
	I0213 23:02:34.212111   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 114/120
	I0213 23:02:35.214188   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 115/120
	I0213 23:02:36.215735   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 116/120
	I0213 23:02:37.216902   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 117/120
	I0213 23:02:38.218366   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 118/120
	I0213 23:02:39.219391   47826 main.go:141] libmachine: (old-k8s-version-245122) Waiting for machine to stop 119/120
	I0213 23:02:40.220815   47826 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0213 23:02:40.220901   47826 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0213 23:02:40.222834   47826 out.go:177] 
	W0213 23:02:40.224194   47826 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0213 23:02:40.224210   47826 out.go:239] * 
	* 
	W0213 23:02:40.227139   47826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 23:02:40.228515   47826 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p old-k8s-version-245122 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122: exit status 3 (18.608708968s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:02:58.838183   48764 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E0213 23:02:58.838205   48764 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-245122" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (139.39s)
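The stderr above shows the stop path polling the guest once per second and giving up after 120 attempts, after which minikube exits with GUEST_STOP_TIMEOUT (exit status 82). A minimal Go sketch of that polling behavior, using hypothetical helper names rather than the real libmachine API:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForStop stands in for libmachine's stop wait; getState mimics the
    // driver's .GetState call. Hypothetical names, not minikube's actual code.
    func waitForStop(getState func() string) error {
        for i := 0; i < 120; i++ {
            fmt.Printf("Waiting for machine to stop %d/120\n", i)
            if getState() == "Stopped" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("unable to stop vm, current state %q", getState())
    }

    func main() {
        // A guest that never leaves "Running", as in the failure above, exhausts
        // all 120 one-second attempts; the caller then reports GUEST_STOP_TIMEOUT.
        err := waitForStop(func() string { return "Running" })
        fmt.Println("stop err:", err)
    }

The failure can be reproduced with the same command recorded above: out/minikube-linux-amd64 stop -p old-k8s-version-245122 --alsologtostderr -v=3.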

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-778731 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-778731 --alsologtostderr -v=3: exit status 82 (2m0.318163438s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-778731"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 23:00:46.436221   48090 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:00:46.436342   48090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:00:46.436352   48090 out.go:304] Setting ErrFile to fd 2...
	I0213 23:00:46.436359   48090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:00:46.436587   48090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:00:46.436880   48090 out.go:298] Setting JSON to false
	I0213 23:00:46.436966   48090 mustload.go:65] Loading cluster: no-preload-778731
	I0213 23:00:46.437354   48090 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:00:46.437439   48090 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:00:46.437619   48090 mustload.go:65] Loading cluster: no-preload-778731
	I0213 23:00:46.437758   48090 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:00:46.437805   48090 stop.go:39] StopHost: no-preload-778731
	I0213 23:00:46.438245   48090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:00:46.438308   48090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:00:46.455321   48090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39573
	I0213 23:00:46.455859   48090 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:00:46.456464   48090 main.go:141] libmachine: Using API Version  1
	I0213 23:00:46.456483   48090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:00:46.456852   48090 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:00:46.458889   48090 out.go:177] * Stopping node "no-preload-778731"  ...
	I0213 23:00:46.460840   48090 main.go:141] libmachine: Stopping "no-preload-778731"...
	I0213 23:00:46.460860   48090 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:00:46.462785   48090 main.go:141] libmachine: (no-preload-778731) Calling .Stop
	I0213 23:00:46.466405   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 0/120
	I0213 23:00:47.468684   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 1/120
	I0213 23:00:48.470138   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 2/120
	I0213 23:00:49.472442   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 3/120
	I0213 23:00:50.474502   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 4/120
	I0213 23:00:51.477115   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 5/120
	I0213 23:00:52.478886   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 6/120
	I0213 23:00:53.480426   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 7/120
	I0213 23:00:54.482675   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 8/120
	I0213 23:00:55.484352   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 9/120
	I0213 23:00:56.486687   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 10/120
	I0213 23:00:57.489109   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 11/120
	I0213 23:00:58.490526   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 12/120
	I0213 23:00:59.492462   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 13/120
	I0213 23:01:00.494783   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 14/120
	I0213 23:01:01.496702   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 15/120
	I0213 23:01:02.498110   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 16/120
	I0213 23:01:03.500571   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 17/120
	I0213 23:01:04.501981   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 18/120
	I0213 23:01:05.503298   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 19/120
	I0213 23:01:06.505803   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 20/120
	I0213 23:01:07.507795   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 21/120
	I0213 23:01:08.509172   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 22/120
	I0213 23:01:09.511150   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 23/120
	I0213 23:01:10.512571   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 24/120
	I0213 23:01:11.514655   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 25/120
	I0213 23:01:12.516643   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 26/120
	I0213 23:01:13.518178   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 27/120
	I0213 23:01:14.520622   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 28/120
	I0213 23:01:15.521983   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 29/120
	I0213 23:01:16.524036   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 30/120
	I0213 23:01:17.525313   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 31/120
	I0213 23:01:18.527002   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 32/120
	I0213 23:01:19.528518   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 33/120
	I0213 23:01:20.530017   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 34/120
	I0213 23:01:21.532269   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 35/120
	I0213 23:01:22.533811   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 36/120
	I0213 23:01:23.535206   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 37/120
	I0213 23:01:24.536498   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 38/120
	I0213 23:01:25.537952   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 39/120
	I0213 23:01:26.540518   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 40/120
	I0213 23:01:27.542131   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 41/120
	I0213 23:01:28.544423   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 42/120
	I0213 23:01:29.545928   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 43/120
	I0213 23:01:30.547502   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 44/120
	I0213 23:01:31.549607   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 45/120
	I0213 23:01:32.551065   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 46/120
	I0213 23:01:33.552843   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 47/120
	I0213 23:01:34.554425   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 48/120
	I0213 23:01:35.556590   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 49/120
	I0213 23:01:36.558965   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 50/120
	I0213 23:01:37.560986   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 51/120
	I0213 23:01:38.562091   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 52/120
	I0213 23:01:39.564511   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 53/120
	I0213 23:01:40.566118   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 54/120
	I0213 23:01:41.568230   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 55/120
	I0213 23:01:42.570546   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 56/120
	I0213 23:01:43.572682   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 57/120
	I0213 23:01:44.574101   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 58/120
	I0213 23:01:45.576527   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 59/120
	I0213 23:01:46.579005   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 60/120
	I0213 23:01:47.580501   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 61/120
	I0213 23:01:48.582294   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 62/120
	I0213 23:01:49.583838   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 63/120
	I0213 23:01:50.585130   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 64/120
	I0213 23:01:51.587346   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 65/120
	I0213 23:01:52.589039   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 66/120
	I0213 23:01:53.590431   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 67/120
	I0213 23:01:54.592602   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 68/120
	I0213 23:01:55.594009   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 69/120
	I0213 23:01:56.596018   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 70/120
	I0213 23:01:57.597521   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 71/120
	I0213 23:01:58.598946   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 72/120
	I0213 23:01:59.600620   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 73/120
	I0213 23:02:00.601929   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 74/120
	I0213 23:02:01.604194   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 75/120
	I0213 23:02:02.605620   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 76/120
	I0213 23:02:03.607118   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 77/120
	I0213 23:02:04.608472   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 78/120
	I0213 23:02:05.610024   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 79/120
	I0213 23:02:06.612406   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 80/120
	I0213 23:02:07.613961   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 81/120
	I0213 23:02:08.615256   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 82/120
	I0213 23:02:09.616520   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 83/120
	I0213 23:02:10.618749   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 84/120
	I0213 23:02:11.620828   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 85/120
	I0213 23:02:12.622450   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 86/120
	I0213 23:02:13.623919   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 87/120
	I0213 23:02:14.625289   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 88/120
	I0213 23:02:15.626979   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 89/120
	I0213 23:02:16.629168   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 90/120
	I0213 23:02:17.630749   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 91/120
	I0213 23:02:18.632694   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 92/120
	I0213 23:02:19.634942   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 93/120
	I0213 23:02:20.636204   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 94/120
	I0213 23:02:21.637659   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 95/120
	I0213 23:02:22.639035   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 96/120
	I0213 23:02:23.640750   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 97/120
	I0213 23:02:24.641955   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 98/120
	I0213 23:02:25.643600   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 99/120
	I0213 23:02:26.645496   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 100/120
	I0213 23:02:27.647039   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 101/120
	I0213 23:02:28.648417   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 102/120
	I0213 23:02:29.649773   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 103/120
	I0213 23:02:30.651156   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 104/120
	I0213 23:02:31.652959   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 105/120
	I0213 23:02:32.654348   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 106/120
	I0213 23:02:33.655810   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 107/120
	I0213 23:02:34.657403   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 108/120
	I0213 23:02:35.658868   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 109/120
	I0213 23:02:36.660999   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 110/120
	I0213 23:02:37.662828   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 111/120
	I0213 23:02:38.664158   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 112/120
	I0213 23:02:39.665594   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 113/120
	I0213 23:02:40.666899   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 114/120
	I0213 23:02:41.668889   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 115/120
	I0213 23:02:42.670213   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 116/120
	I0213 23:02:43.671824   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 117/120
	I0213 23:02:44.673494   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 118/120
	I0213 23:02:45.675007   48090 main.go:141] libmachine: (no-preload-778731) Waiting for machine to stop 119/120
	I0213 23:02:46.676458   48090 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0213 23:02:46.676522   48090 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0213 23:02:46.678756   48090 out.go:177] 
	W0213 23:02:46.680436   48090 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0213 23:02:46.680455   48090 out.go:239] * 
	* 
	W0213 23:02:46.683525   48090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 23:02:46.684934   48090 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p no-preload-778731 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731: exit status 3 (18.551815588s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:03:05.238301   48815 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host
	E0213 23:03:05.238321   48815 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-778731" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.87s)
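The post-mortem check above returns exit status 3 because the status probe cannot reach the guest's SSH port ("dial tcp 192.168.83.31:22: connect: no route to host") and therefore reports the host state as Error. A minimal Go sketch of that probe, with assumed names (not minikube's actual status.go):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    // hostState stands in for the host probe behind "minikube status": dial the
    // guest's SSH port and map a connection failure to the "Error" state.
    func hostState(addr string) string {
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            // Corresponds to the "no route to host" status errors logged above.
            fmt.Fprintln(os.Stderr, "status error:", err)
            return "Error"
        }
        conn.Close()
        return "Running"
    }

    func main() {
        if hostState("192.168.83.31:22") == "Error" {
            os.Exit(3) // the harness treats exit status 3 here as "may be ok"
        }
    }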

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-340656 --alsologtostderr -v=3
E0213 23:02:03.709567   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-340656 --alsologtostderr -v=3: exit status 82 (2m0.295757004s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-340656"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 23:01:43.322785   48479 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:01:43.322972   48479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:43.322990   48479 out.go:304] Setting ErrFile to fd 2...
	I0213 23:01:43.322999   48479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:01:43.323284   48479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:01:43.323596   48479 out.go:298] Setting JSON to false
	I0213 23:01:43.323699   48479 mustload.go:65] Loading cluster: embed-certs-340656
	I0213 23:01:43.324299   48479 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:01:43.324438   48479 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:01:43.324720   48479 mustload.go:65] Loading cluster: embed-certs-340656
	I0213 23:01:43.324905   48479 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:01:43.324968   48479 stop.go:39] StopHost: embed-certs-340656
	I0213 23:01:43.325484   48479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:01:43.325556   48479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:01:43.339988   48479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I0213 23:01:43.340453   48479 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:01:43.341012   48479 main.go:141] libmachine: Using API Version  1
	I0213 23:01:43.341038   48479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:01:43.341397   48479 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:01:43.344142   48479 out.go:177] * Stopping node "embed-certs-340656"  ...
	I0213 23:01:43.345592   48479 main.go:141] libmachine: Stopping "embed-certs-340656"...
	I0213 23:01:43.345612   48479 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:01:43.347789   48479 main.go:141] libmachine: (embed-certs-340656) Calling .Stop
	I0213 23:01:43.351545   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 0/120
	I0213 23:01:44.353979   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 1/120
	I0213 23:01:45.355266   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 2/120
	I0213 23:01:46.356893   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 3/120
	I0213 23:01:47.358114   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 4/120
	I0213 23:01:48.360150   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 5/120
	I0213 23:01:49.361608   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 6/120
	I0213 23:01:50.363046   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 7/120
	I0213 23:01:51.364660   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 8/120
	I0213 23:01:52.366131   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 9/120
	I0213 23:01:53.368404   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 10/120
	I0213 23:01:54.370022   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 11/120
	I0213 23:01:55.371400   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 12/120
	I0213 23:01:56.372676   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 13/120
	I0213 23:01:57.374151   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 14/120
	I0213 23:01:58.375915   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 15/120
	I0213 23:01:59.377257   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 16/120
	I0213 23:02:00.378777   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 17/120
	I0213 23:02:01.379985   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 18/120
	I0213 23:02:02.381486   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 19/120
	I0213 23:02:03.383650   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 20/120
	I0213 23:02:04.384925   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 21/120
	I0213 23:02:05.386461   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 22/120
	I0213 23:02:06.387940   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 23/120
	I0213 23:02:07.389454   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 24/120
	I0213 23:02:08.391160   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 25/120
	I0213 23:02:09.392532   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 26/120
	I0213 23:02:10.394094   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 27/120
	I0213 23:02:11.396300   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 28/120
	I0213 23:02:12.397670   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 29/120
	I0213 23:02:13.399285   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 30/120
	I0213 23:02:14.400867   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 31/120
	I0213 23:02:15.402255   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 32/120
	I0213 23:02:16.403802   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 33/120
	I0213 23:02:17.405498   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 34/120
	I0213 23:02:18.407805   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 35/120
	I0213 23:02:19.409384   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 36/120
	I0213 23:02:20.410909   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 37/120
	I0213 23:02:21.412351   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 38/120
	I0213 23:02:22.414034   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 39/120
	I0213 23:02:23.416312   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 40/120
	I0213 23:02:24.418066   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 41/120
	I0213 23:02:25.419412   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 42/120
	I0213 23:02:26.421587   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 43/120
	I0213 23:02:27.422951   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 44/120
	I0213 23:02:28.425018   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 45/120
	I0213 23:02:29.426468   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 46/120
	I0213 23:02:30.428547   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 47/120
	I0213 23:02:31.429920   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 48/120
	I0213 23:02:32.431181   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 49/120
	I0213 23:02:33.433362   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 50/120
	I0213 23:02:34.434701   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 51/120
	I0213 23:02:35.436078   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 52/120
	I0213 23:02:36.438979   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 53/120
	I0213 23:02:37.440365   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 54/120
	I0213 23:02:38.442627   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 55/120
	I0213 23:02:39.444135   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 56/120
	I0213 23:02:40.445632   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 57/120
	I0213 23:02:41.446971   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 58/120
	I0213 23:02:42.448268   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 59/120
	I0213 23:02:43.450508   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 60/120
	I0213 23:02:44.451778   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 61/120
	I0213 23:02:45.453038   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 62/120
	I0213 23:02:46.454638   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 63/120
	I0213 23:02:47.456071   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 64/120
	I0213 23:02:48.458253   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 65/120
	I0213 23:02:49.459545   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 66/120
	I0213 23:02:50.460866   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 67/120
	I0213 23:02:51.462323   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 68/120
	I0213 23:02:52.463643   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 69/120
	I0213 23:02:53.465961   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 70/120
	I0213 23:02:54.467203   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 71/120
	I0213 23:02:55.468705   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 72/120
	I0213 23:02:56.470266   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 73/120
	I0213 23:02:57.471656   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 74/120
	I0213 23:02:58.473778   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 75/120
	I0213 23:02:59.475280   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 76/120
	I0213 23:03:00.476935   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 77/120
	I0213 23:03:01.478706   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 78/120
	I0213 23:03:02.480271   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 79/120
	I0213 23:03:03.482690   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 80/120
	I0213 23:03:04.484464   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 81/120
	I0213 23:03:05.486115   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 82/120
	I0213 23:03:06.487755   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 83/120
	I0213 23:03:07.489264   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 84/120
	I0213 23:03:08.491287   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 85/120
	I0213 23:03:09.492737   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 86/120
	I0213 23:03:10.494170   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 87/120
	I0213 23:03:11.495583   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 88/120
	I0213 23:03:12.497048   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 89/120
	I0213 23:03:13.498574   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 90/120
	I0213 23:03:14.500030   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 91/120
	I0213 23:03:15.501499   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 92/120
	I0213 23:03:16.503092   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 93/120
	I0213 23:03:17.504482   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 94/120
	I0213 23:03:18.506678   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 95/120
	I0213 23:03:19.508546   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 96/120
	I0213 23:03:20.509902   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 97/120
	I0213 23:03:21.511485   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 98/120
	I0213 23:03:22.512894   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 99/120
	I0213 23:03:23.515159   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 100/120
	I0213 23:03:24.516370   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 101/120
	I0213 23:03:25.517818   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 102/120
	I0213 23:03:26.519119   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 103/120
	I0213 23:03:27.520474   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 104/120
	I0213 23:03:28.522560   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 105/120
	I0213 23:03:29.523992   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 106/120
	I0213 23:03:30.525467   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 107/120
	I0213 23:03:31.526772   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 108/120
	I0213 23:03:32.528188   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 109/120
	I0213 23:03:33.529489   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 110/120
	I0213 23:03:34.530832   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 111/120
	I0213 23:03:35.532463   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 112/120
	I0213 23:03:36.533967   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 113/120
	I0213 23:03:37.535399   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 114/120
	I0213 23:03:38.537556   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 115/120
	I0213 23:03:39.538955   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 116/120
	I0213 23:03:40.540521   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 117/120
	I0213 23:03:41.541887   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 118/120
	I0213 23:03:42.543470   48479 main.go:141] libmachine: (embed-certs-340656) Waiting for machine to stop 119/120
	I0213 23:03:43.544386   48479 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0213 23:03:43.544453   48479 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0213 23:03:43.546506   48479 out.go:177] 
	W0213 23:03:43.547987   48479 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0213 23:03:43.548000   48479 out.go:239] * 
	* 
	W0213 23:03:43.551187   48479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 23:03:43.552716   48479 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube (first stop). args "out/minikube-linux-amd64 stop -p embed-certs-340656 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
E0213 23:03:54.185788   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656: exit status 3 (18.515583922s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:04:02.070227   49247 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host
	E0213 23:04:02.070256   49247 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-340656" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.81s)
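Each group's failing step follows the same shape: run the stop command for the profile, fail the test on a non-zero exit code, then fall back to the post-mortem status check. A minimal Go sketch of that pattern, with hypothetical helpers (not the real start_stop_delete_test.go code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // stopProfile runs the same command recorded in the report above and
    // returns an error when it exits non-zero. Hypothetical helper name.
    func stopProfile(profile string) error {
        cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", profile,
            "--alsologtostderr", "-v=3")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("failed stopping minikube (first stop): %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := stopProfile("embed-certs-340656"); err != nil {
            fmt.Println(err) // surfaced above as exit status 82
        }
    }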

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-083863 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-083863 --alsologtostderr -v=3: exit status 82 (2m0.291612115s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-083863"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 23:02:31.367654   48724 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:02:31.367836   48724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:02:31.367849   48724 out.go:304] Setting ErrFile to fd 2...
	I0213 23:02:31.367856   48724 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:02:31.368076   48724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:02:31.368375   48724 out.go:298] Setting JSON to false
	I0213 23:02:31.368478   48724 mustload.go:65] Loading cluster: default-k8s-diff-port-083863
	I0213 23:02:31.368862   48724 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:02:31.368947   48724 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:02:31.369135   48724 mustload.go:65] Loading cluster: default-k8s-diff-port-083863
	I0213 23:02:31.369288   48724 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:02:31.369331   48724 stop.go:39] StopHost: default-k8s-diff-port-083863
	I0213 23:02:31.369767   48724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:02:31.369832   48724 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:02:31.385050   48724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33743
	I0213 23:02:31.385528   48724 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:02:31.386203   48724 main.go:141] libmachine: Using API Version  1
	I0213 23:02:31.386228   48724 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:02:31.386655   48724 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:02:31.388610   48724 out.go:177] * Stopping node "default-k8s-diff-port-083863"  ...
	I0213 23:02:31.390396   48724 main.go:141] libmachine: Stopping "default-k8s-diff-port-083863"...
	I0213 23:02:31.390433   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:02:31.392565   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Stop
	I0213 23:02:31.396028   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 0/120
	I0213 23:02:32.397477   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 1/120
	I0213 23:02:33.399017   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 2/120
	I0213 23:02:34.400563   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 3/120
	I0213 23:02:35.402115   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 4/120
	I0213 23:02:36.404359   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 5/120
	I0213 23:02:37.405645   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 6/120
	I0213 23:02:38.407136   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 7/120
	I0213 23:02:39.408395   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 8/120
	I0213 23:02:40.409810   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 9/120
	I0213 23:02:41.411351   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 10/120
	I0213 23:02:42.412881   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 11/120
	I0213 23:02:43.414257   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 12/120
	I0213 23:02:44.415625   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 13/120
	I0213 23:02:45.417173   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 14/120
	I0213 23:02:46.419436   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 15/120
	I0213 23:02:47.421246   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 16/120
	I0213 23:02:48.422689   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 17/120
	I0213 23:02:49.424123   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 18/120
	I0213 23:02:50.425611   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 19/120
	I0213 23:02:51.427393   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 20/120
	I0213 23:02:52.428777   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 21/120
	I0213 23:02:53.430300   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 22/120
	I0213 23:02:54.431646   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 23/120
	I0213 23:02:55.433328   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 24/120
	I0213 23:02:56.435341   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 25/120
	I0213 23:02:57.436815   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 26/120
	I0213 23:02:58.438352   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 27/120
	I0213 23:02:59.439846   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 28/120
	I0213 23:03:00.441234   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 29/120
	I0213 23:03:01.442404   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 30/120
	I0213 23:03:02.444418   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 31/120
	I0213 23:03:03.445780   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 32/120
	I0213 23:03:04.448365   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 33/120
	I0213 23:03:05.449923   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 34/120
	I0213 23:03:06.452081   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 35/120
	I0213 23:03:07.453983   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 36/120
	I0213 23:03:08.455063   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 37/120
	I0213 23:03:09.456562   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 38/120
	I0213 23:03:10.457996   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 39/120
	I0213 23:03:11.460229   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 40/120
	I0213 23:03:12.461700   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 41/120
	I0213 23:03:13.463077   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 42/120
	I0213 23:03:14.464449   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 43/120
	I0213 23:03:15.466040   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 44/120
	I0213 23:03:16.468073   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 45/120
	I0213 23:03:17.469760   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 46/120
	I0213 23:03:18.471209   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 47/120
	I0213 23:03:19.472806   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 48/120
	I0213 23:03:20.474260   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 49/120
	I0213 23:03:21.476498   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 50/120
	I0213 23:03:22.477941   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 51/120
	I0213 23:03:23.479362   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 52/120
	I0213 23:03:24.480888   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 53/120
	I0213 23:03:25.482580   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 54/120
	I0213 23:03:26.484528   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 55/120
	I0213 23:03:27.485935   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 56/120
	I0213 23:03:28.487435   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 57/120
	I0213 23:03:29.489110   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 58/120
	I0213 23:03:30.490609   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 59/120
	I0213 23:03:31.492961   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 60/120
	I0213 23:03:32.494613   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 61/120
	I0213 23:03:33.496101   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 62/120
	I0213 23:03:34.497502   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 63/120
	I0213 23:03:35.498972   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 64/120
	I0213 23:03:36.500936   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 65/120
	I0213 23:03:37.502435   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 66/120
	I0213 23:03:38.504268   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 67/120
	I0213 23:03:39.505890   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 68/120
	I0213 23:03:40.507387   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 69/120
	I0213 23:03:41.508748   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 70/120
	I0213 23:03:42.510396   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 71/120
	I0213 23:03:43.512014   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 72/120
	I0213 23:03:44.513395   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 73/120
	I0213 23:03:45.514919   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 74/120
	I0213 23:03:46.517067   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 75/120
	I0213 23:03:47.518421   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 76/120
	I0213 23:03:48.520297   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 77/120
	I0213 23:03:49.522054   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 78/120
	I0213 23:03:50.523629   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 79/120
	I0213 23:03:51.526221   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 80/120
	I0213 23:03:52.527574   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 81/120
	I0213 23:03:53.528946   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 82/120
	I0213 23:03:54.530292   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 83/120
	I0213 23:03:55.531855   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 84/120
	I0213 23:03:56.533846   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 85/120
	I0213 23:03:57.535292   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 86/120
	I0213 23:03:58.536683   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 87/120
	I0213 23:03:59.538021   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 88/120
	I0213 23:04:00.539383   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 89/120
	I0213 23:04:01.541541   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 90/120
	I0213 23:04:02.543084   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 91/120
	I0213 23:04:03.544438   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 92/120
	I0213 23:04:04.545978   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 93/120
	I0213 23:04:05.547377   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 94/120
	I0213 23:04:06.549801   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 95/120
	I0213 23:04:07.551529   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 96/120
	I0213 23:04:08.552936   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 97/120
	I0213 23:04:09.554401   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 98/120
	I0213 23:04:10.555798   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 99/120
	I0213 23:04:11.557187   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 100/120
	I0213 23:04:12.558643   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 101/120
	I0213 23:04:13.560330   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 102/120
	I0213 23:04:14.561858   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 103/120
	I0213 23:04:15.563507   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 104/120
	I0213 23:04:16.566043   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 105/120
	I0213 23:04:17.567587   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 106/120
	I0213 23:04:18.569203   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 107/120
	I0213 23:04:19.570895   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 108/120
	I0213 23:04:20.572488   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 109/120
	I0213 23:04:21.574895   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 110/120
	I0213 23:04:22.576604   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 111/120
	I0213 23:04:23.578179   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 112/120
	I0213 23:04:24.580322   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 113/120
	I0213 23:04:25.582059   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 114/120
	I0213 23:04:26.584220   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 115/120
	I0213 23:04:27.585647   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 116/120
	I0213 23:04:28.587217   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 117/120
	I0213 23:04:29.588744   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 118/120
	I0213 23:04:30.590581   48724 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for machine to stop 119/120
	I0213 23:04:31.591814   48724 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0213 23:04:31.591878   48724 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0213 23:04:31.593974   48724 out.go:177] 
	W0213 23:04:31.595583   48724 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0213 23:04:31.595602   48724 out.go:239] * 
	* 
	W0213 23:04:31.598592   48724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0213 23:04:31.599987   48724 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-083863 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863: exit status 3 (18.596809825s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:04:50.198216   49522 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E0213 23:04:50.198245   49522 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-083863" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.89s)
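The stop failure above is the driver polling once per second for the guest to power off ("Waiting for machine to stop 0/120" through "119/120") and then giving up while the VM still reports "Running", which is what surfaces as GUEST_STOP_TIMEOUT and exit status 82. Below is a minimal Go sketch of that wait pattern, with a hypothetical Driver interface standing in for the real libmachine driver; it is illustrative only, not minikube's actual code.

package stopwait

import (
	"errors"
	"fmt"
	"time"
)

// State mirrors the coarse machine states that appear in the log.
type State int

const (
	Running State = iota
	Stopped
)

// Driver is a hypothetical stand-in for a libmachine-style driver.
type Driver interface {
	Stop() error              // request a guest shutdown
	GetState() (State, error) // poll the current VM state
}

// waitForStop requests a stop and then polls once per second, which is
// the "Waiting for machine to stop N/120" cadence seen in the log.
func waitForStop(d Driver, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		st, err := d.GetState()
		if err != nil {
			return err
		}
		if st == Stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Still running after the final attempt; this is the condition that
	// is reported above as GUEST_STOP_TIMEOUT / exit status 82.
	return errors.New(`unable to stop vm, current state "Running"`)
}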

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122: exit status 3 (3.167689545s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:03:02.006213   48867 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E0213 23:03:02.006235   48867 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-245122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-245122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153095846s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-245122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122: exit status 3 (3.062627852s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:03:11.222371   48977 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host
	E0213 23:03:11.222402   48977 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-245122" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
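Every post-stop check in this group fails before SSH even starts: the TCP dial to the guest's port 22 returns "no route to host", so no session can be established at all. A small standard-library Go sketch of that reachability distinction follows; the probeSSH name is made up, and the address in the comment is taken straight from the log.

package sshprobe

import (
	"net"
	"time"
)

// probeSSH dials the guest's SSH port with a short timeout. A
// "no route to host" error (as logged for 192.168.50.36:22) means the TCP
// layer cannot reach the guest at all, whereas "connection refused" would
// mean the host answers but nothing is listening on port 22.
func probeSSH(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}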

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731: exit status 3 (3.200025826s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:03:08.438222   48947 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host
	E0213 23:03:08.438242   48947 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-778731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-778731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152765772s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-778731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731: exit status 3 (3.062937777s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:03:17.654331   49088 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host
	E0213 23:03:17.654360   49088 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.31:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-778731" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
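The "(dbg) Non-zero exit ... exit status 11" lines are the harness running the minikube binary and recording its exit code alongside the captured stdout/stderr blocks. A hedged sketch of how such an exit status can be read back with os/exec; the helper name and return shape are illustrative, not the harness's real helpers.

package execsketch

import (
	"errors"
	"os/exec"
)

// runAndExitCode runs a command and returns its exit status together with
// the combined output, which is how values like "exit status 11" above are
// obtained.
func runAndExitCode(name string, args ...string) (int, []byte, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err == nil {
		return 0, out, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), out, nil // e.g. 11 for MK_ADDON_ENABLE_PAUSED
	}
	return -1, out, err // the binary could not be started at all
}

For example, runAndExitCode("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "no-preload-778731") would report 11 in the situation captured above.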

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656: exit status 3 (3.200075343s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:04:05.270281   49341 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host
	E0213 23:04:05.270303   49341 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-340656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0213 23:04:11.138016   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-340656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153640382s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-340656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656: exit status 3 (3.062024727s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:04:14.486286   49413 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host
	E0213 23:04:14.486311   49413 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.56:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-340656" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
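The post-stop checks run status --format={{.Host}}; per the flag's help text minikube renders that value as a Go template over its status structure, which is why the captured stdout is just the single word "Error" (or "Stopped" after a clean stop). A minimal sketch of that rendering, using an illustrative struct rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in for the struct the status command
// renders; only Host matters for --format={{.Host}}.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Rendering yields just the single word captured in -- stdout -- above,
	// e.g. "Error" here, or "Stopped" after a clean stop.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Error"}); err != nil {
		panic(err)
	}
}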

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863: exit status 3 (3.167927524s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:04:53.366316   49615 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E0213 23:04:53.366337   49615 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-083863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-083863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153826958s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-083863 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863: exit status 3 (3.061988294s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0213 23:05:02.582407   49674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host
	E0213 23:05:02.582429   49674 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-083863" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0213 23:12:03.709561   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 23:14:11.137649   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:14:21.414184   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 23:15:44.463640   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 23:17:03.710495   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-245122 -n old-k8s-version-245122
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:19:26.012492094 +0000 UTC m=+4985.627265999
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
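The 9m0s wait above is a poll for a Ready pod carrying the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. The following client-go sketch performs the same check; it is not the harness's own helper, and the kubeconfig path and 10-second poll interval are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if podReady(p) {
					fmt.Println("dashboard pod is Ready:", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for dashboard pod:", ctx.Err())
			return
		case <-time.After(10 * time.Second):
		}
	}
}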
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-245122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-245122 logs -n 25: (1.77937564s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:05:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:05:02.640377   49715 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:05:02.640501   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640509   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:05:02.640513   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640736   49715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:05:02.641321   49715 out.go:298] Setting JSON to false
	I0213 23:05:02.642273   49715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6454,"bootTime":1707859049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:05:02.642347   49715 start.go:138] virtualization: kvm guest
	I0213 23:05:02.645098   49715 out.go:177] * [default-k8s-diff-port-083863] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:05:02.646964   49715 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:05:02.646970   49715 notify.go:220] Checking for updates...
	I0213 23:05:02.648511   49715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:05:02.650105   49715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:05:02.651715   49715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:05:02.653359   49715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:05:02.655095   49715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:05:02.657048   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:05:02.657426   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.657495   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.672324   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0213 23:05:02.672730   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.673260   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.673290   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.673647   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.673817   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.674096   49715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:05:02.674432   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.674472   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.688915   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0213 23:05:02.689349   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.689790   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.689816   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.690223   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.690421   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.727324   49715 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:05:02.728797   49715 start.go:298] selected driver: kvm2
	I0213 23:05:02.728815   49715 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.728927   49715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:05:02.729600   49715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.729674   49715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:05:02.745692   49715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:05:02.746106   49715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:05:02.746172   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:05:02.746187   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:05:02.746199   49715 start_flags.go:321] config:
	{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.746779   49715 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.748860   49715 out.go:177] * Starting control plane node default-k8s-diff-port-083863 in cluster default-k8s-diff-port-083863
	I0213 23:05:02.750290   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:05:02.750326   49715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:05:02.750333   49715 cache.go:56] Caching tarball of preloaded images
	I0213 23:05:02.750421   49715 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:05:02.750463   49715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:05:02.750576   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:05:02.750762   49715 start.go:365] acquiring machines lock for default-k8s-diff-port-083863: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:05:07.158187   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:10.230150   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:16.310133   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:19.382235   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:25.462139   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:28.534229   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:34.614137   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:37.686165   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:43.766206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:46.838168   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:52.918134   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:55.990211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:02.070192   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:05.142167   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:11.222152   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:14.294088   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:20.374194   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:23.446217   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:29.526175   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:32.598147   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:38.678146   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:41.750169   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:47.830142   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:50.902206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:56.982180   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:00.054195   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:06.134182   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:09.206215   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:15.286248   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:18.358211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:24.438162   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:27.510191   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:33.590177   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:36.662174   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:42.742237   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:45.814203   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:48.818472   49120 start.go:369] acquired machines lock for "no-preload-778731" in 4m31.005837415s
	I0213 23:07:48.818529   49120 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:07:48.818538   49120 fix.go:54] fixHost starting: 
	I0213 23:07:48.818916   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:07:48.818948   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:07:48.833483   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 23:07:48.833925   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:07:48.834425   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:07:48.834452   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:07:48.834778   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:07:48.835000   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:07:48.835155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:07:48.836889   49120 fix.go:102] recreateIfNeeded on no-preload-778731: state=Stopped err=<nil>
	I0213 23:07:48.836930   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	W0213 23:07:48.837148   49120 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:07:48.840033   49120 out.go:177] * Restarting existing kvm2 VM for "no-preload-778731" ...
	I0213 23:07:48.816416   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:07:48.816456   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:07:48.818324   49036 machine.go:91] provisioned docker machine in 4m37.408860809s
	I0213 23:07:48.818361   49036 fix.go:56] fixHost completed within 4m37.431023423s
	I0213 23:07:48.818366   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 4m37.431037395s
	W0213 23:07:48.818389   49036 start.go:694] error starting host: provision: host is not running
	W0213 23:07:48.818527   49036 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 23:07:48.818541   49036 start.go:709] Will try again in 5 seconds ...
	I0213 23:07:48.841324   49120 main.go:141] libmachine: (no-preload-778731) Calling .Start
	I0213 23:07:48.841532   49120 main.go:141] libmachine: (no-preload-778731) Ensuring networks are active...
	I0213 23:07:48.842327   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network default is active
	I0213 23:07:48.842678   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network mk-no-preload-778731 is active
	I0213 23:07:48.843032   49120 main.go:141] libmachine: (no-preload-778731) Getting domain xml...
	I0213 23:07:48.843852   49120 main.go:141] libmachine: (no-preload-778731) Creating domain...
	I0213 23:07:50.042665   49120 main.go:141] libmachine: (no-preload-778731) Waiting to get IP...
	I0213 23:07:50.043679   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.044091   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.044189   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.044069   50144 retry.go:31] will retry after 251.949505ms: waiting for machine to come up
	I0213 23:07:50.297817   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.298535   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.298567   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.298493   50144 retry.go:31] will retry after 319.494876ms: waiting for machine to come up
	I0213 23:07:50.620050   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.620443   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.620468   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.620395   50144 retry.go:31] will retry after 308.031117ms: waiting for machine to come up
	I0213 23:07:50.929942   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.930361   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.930391   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.930309   50144 retry.go:31] will retry after 513.800078ms: waiting for machine to come up
	I0213 23:07:51.446223   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:51.446875   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:51.446904   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:51.446813   50144 retry.go:31] will retry after 592.80917ms: waiting for machine to come up
	I0213 23:07:52.042126   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.042542   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.042573   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.042519   50144 retry.go:31] will retry after 688.102963ms: waiting for machine to come up
	I0213 23:07:53.818751   49036 start.go:365] acquiring machines lock for old-k8s-version-245122: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:07:52.732194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.732576   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.732602   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.732538   50144 retry.go:31] will retry after 1.143041451s: waiting for machine to come up
	I0213 23:07:53.877287   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:53.877661   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:53.877687   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:53.877624   50144 retry.go:31] will retry after 918.528315ms: waiting for machine to come up
	I0213 23:07:54.797760   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:54.798287   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:54.798314   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:54.798252   50144 retry.go:31] will retry after 1.679404533s: waiting for machine to come up
	I0213 23:07:56.479283   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:56.479853   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:56.479880   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:56.479785   50144 retry.go:31] will retry after 1.510596076s: waiting for machine to come up
	I0213 23:07:57.992757   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:57.993320   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:57.993352   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:57.993274   50144 retry.go:31] will retry after 2.041602638s: waiting for machine to come up
	I0213 23:08:00.036654   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:00.037130   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:00.037162   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:00.037075   50144 retry.go:31] will retry after 3.403460211s: waiting for machine to come up
	I0213 23:08:03.444689   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:03.445152   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:03.445176   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:03.445088   50144 retry.go:31] will retry after 4.270182412s: waiting for machine to come up
	I0213 23:08:09.107106   49443 start.go:369] acquired machines lock for "embed-certs-340656" in 3m54.456203319s
	I0213 23:08:09.107175   49443 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:09.107194   49443 fix.go:54] fixHost starting: 
	I0213 23:08:09.107647   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:09.107696   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:09.124314   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0213 23:08:09.124675   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:09.125131   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:08:09.125153   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:09.125509   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:09.125705   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:09.125898   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:08:09.127641   49443 fix.go:102] recreateIfNeeded on embed-certs-340656: state=Stopped err=<nil>
	I0213 23:08:09.127661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	W0213 23:08:09.127830   49443 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:09.130334   49443 out.go:177] * Restarting existing kvm2 VM for "embed-certs-340656" ...
	I0213 23:08:09.132354   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Start
	I0213 23:08:09.132546   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring networks are active...
	I0213 23:08:09.133391   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network default is active
	I0213 23:08:09.133758   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network mk-embed-certs-340656 is active
	I0213 23:08:09.134160   49443 main.go:141] libmachine: (embed-certs-340656) Getting domain xml...
	I0213 23:08:09.134954   49443 main.go:141] libmachine: (embed-certs-340656) Creating domain...
	I0213 23:08:07.719971   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.720520   49120 main.go:141] libmachine: (no-preload-778731) Found IP for machine: 192.168.83.31
	I0213 23:08:07.720541   49120 main.go:141] libmachine: (no-preload-778731) Reserving static IP address...
	I0213 23:08:07.720559   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has current primary IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.721043   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.721071   49120 main.go:141] libmachine: (no-preload-778731) DBG | skip adding static IP to network mk-no-preload-778731 - found existing host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"}
	I0213 23:08:07.721086   49120 main.go:141] libmachine: (no-preload-778731) Reserved static IP address: 192.168.83.31
	I0213 23:08:07.721105   49120 main.go:141] libmachine: (no-preload-778731) DBG | Getting to WaitForSSH function...
	I0213 23:08:07.721120   49120 main.go:141] libmachine: (no-preload-778731) Waiting for SSH to be available...
	I0213 23:08:07.723769   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724343   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.724370   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724485   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH client type: external
	I0213 23:08:07.724515   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa (-rw-------)
	I0213 23:08:07.724552   49120 main.go:141] libmachine: (no-preload-778731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:07.724577   49120 main.go:141] libmachine: (no-preload-778731) DBG | About to run SSH command:
	I0213 23:08:07.724605   49120 main.go:141] libmachine: (no-preload-778731) DBG | exit 0
	I0213 23:08:07.823050   49120 main.go:141] libmachine: (no-preload-778731) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:07.823504   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetConfigRaw
	I0213 23:08:07.824155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:07.826730   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827237   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.827277   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827608   49120 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:08:07.827851   49120 machine.go:88] provisioning docker machine ...
	I0213 23:08:07.827877   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:07.828112   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828416   49120 buildroot.go:166] provisioning hostname "no-preload-778731"
	I0213 23:08:07.828464   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828745   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.832015   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832438   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.832477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832698   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.832929   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833125   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833288   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.833480   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.833828   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.833845   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778731 && echo "no-preload-778731" | sudo tee /etc/hostname
	I0213 23:08:07.979041   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778731
	
	I0213 23:08:07.979079   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.982378   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982755   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.982783   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982982   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.983137   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983346   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983462   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.983600   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.983946   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.983967   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778731/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:08.122610   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:08.122641   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:08.122657   49120 buildroot.go:174] setting up certificates
	I0213 23:08:08.122666   49120 provision.go:83] configureAuth start
	I0213 23:08:08.122674   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:08.122935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:08.125641   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126016   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.126046   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126205   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.128441   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128736   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.128780   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128918   49120 provision.go:138] copyHostCerts
	I0213 23:08:08.128984   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:08.128997   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:08.129067   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:08.129198   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:08.129211   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:08.129248   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:08.129321   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:08.129335   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:08.129373   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:08.129443   49120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.no-preload-778731 san=[192.168.83.31 192.168.83.31 localhost 127.0.0.1 minikube no-preload-778731]
	I0213 23:08:08.326156   49120 provision.go:172] copyRemoteCerts
	I0213 23:08:08.326234   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:08.326263   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.329373   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.329952   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.329986   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.330257   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.330447   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.330599   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.330737   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.423570   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:08.447689   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:08.472766   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:08:08.496594   49120 provision.go:86] duration metric: configureAuth took 373.917105ms
	I0213 23:08:08.496623   49120 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:08.496815   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:08:08.496899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.499464   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499771   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.499801   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.500116   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500284   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500459   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.500651   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.500962   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.500981   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:08.828899   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:08.828935   49120 machine.go:91] provisioned docker machine in 1.001067662s
	I0213 23:08:08.828948   49120 start.go:300] post-start starting for "no-preload-778731" (driver="kvm2")
	I0213 23:08:08.828966   49120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:08.828987   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:08.829378   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:08.829401   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.831985   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832340   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.832365   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832498   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.832717   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.832873   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.833022   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.930192   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:08.934633   49120 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:08.934660   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:08.934723   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:08.934804   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:08.934893   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:08.945400   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:08.973850   49120 start.go:303] post-start completed in 144.888108ms
	I0213 23:08:08.973894   49120 fix.go:56] fixHost completed within 20.155355472s
	I0213 23:08:08.973917   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.976477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976799   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.976831   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976990   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.977177   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977358   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977513   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.977664   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.978069   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.978082   49120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:09.106952   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865689.053803664
	
	I0213 23:08:09.106977   49120 fix.go:206] guest clock: 1707865689.053803664
	I0213 23:08:09.106984   49120 fix.go:219] Guest: 2024-02-13 23:08:09.053803664 +0000 UTC Remote: 2024-02-13 23:08:08.973898202 +0000 UTC m=+291.312557253 (delta=79.905462ms)
	I0213 23:08:09.107004   49120 fix.go:190] guest clock delta is within tolerance: 79.905462ms
	I0213 23:08:09.107011   49120 start.go:83] releasing machines lock for "no-preload-778731", held for 20.288505954s
	I0213 23:08:09.107046   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.107372   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:09.110226   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110592   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.110623   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110795   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111368   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111531   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111622   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:09.111662   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.113712   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.114053   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.114096   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.117964   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.118031   49120 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:09.118065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.118167   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.118318   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.118615   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.120610   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121054   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.121088   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121290   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.121461   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.121627   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.121770   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.234065   49120 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:09.240751   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:09.393966   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:09.401672   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:09.401767   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:09.426073   49120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:09.426099   49120 start.go:475] detecting cgroup driver to use...
	I0213 23:08:09.426172   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:09.446114   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:09.461330   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:09.461404   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:09.475964   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:09.490801   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:09.621898   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:09.747413   49120 docker.go:233] disabling docker service ...
	I0213 23:08:09.747470   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:09.766642   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:09.783116   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:09.910634   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:10.052181   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:10.066413   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:10.089436   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:10.089505   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.100366   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:10.100453   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.111681   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.122231   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.132945   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:10.146287   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:10.156405   49120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:10.156481   49120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:10.172152   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:10.182862   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:10.315633   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:10.509774   49120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:10.509878   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:10.514924   49120 start.go:543] Will wait 60s for crictl version
	I0213 23:08:10.515016   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.518898   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:10.558596   49120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:10.558695   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.611876   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.664604   49120 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:08:10.665908   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:10.669029   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669393   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:10.669442   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669676   49120 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:10.673975   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:10.686760   49120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:08:10.686830   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:10.730784   49120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:08:10.730813   49120 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:08:10.730900   49120 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.730903   49120 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.730909   49120 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.730914   49120 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.731026   49120 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.731083   49120 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.731131   49120 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.731497   49120 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732506   49120 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.732511   49120 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.732513   49120 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.732543   49120 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732577   49120 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.732597   49120 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.732719   49120 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.732759   49120 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.880038   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.891830   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0213 23:08:10.905668   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.930079   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.940850   49120 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0213 23:08:10.940894   49120 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.940941   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.942664   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.985299   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.011467   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.040720   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.099497   49120 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0213 23:08:11.099544   49120 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0213 23:08:11.099577   49120 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.099614   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:11.099636   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099651   49120 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0213 23:08:11.099683   49120 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.099711   49120 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0213 23:08:11.099740   49120 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.099746   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099760   49120 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0213 23:08:11.099767   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099782   49120 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.099547   49120 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.099901   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099916   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.107567   49120 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0213 23:08:11.107614   49120 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.107675   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.119038   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.157701   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.157799   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.157722   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.157768   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.157830   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.157919   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0213 23:08:11.158002   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.200990   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 23:08:11.201117   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:11.299985   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.300039   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 23:08:11.300041   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300130   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:11.300137   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300148   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0213 23:08:11.300163   49120 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300198   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300209   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300216   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0213 23:08:11.300203   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300098   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300293   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300096   49120 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.318252   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0213 23:08:11.318307   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318355   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318520   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0213 23:08:10.406170   49443 main.go:141] libmachine: (embed-certs-340656) Waiting to get IP...
	I0213 23:08:10.407139   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.407616   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.407692   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.407598   50262 retry.go:31] will retry after 193.299479ms: waiting for machine to come up
	I0213 23:08:10.603143   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.603673   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.603696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.603627   50262 retry.go:31] will retry after 369.099644ms: waiting for machine to come up
	I0213 23:08:10.974421   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.974922   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.974953   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.974870   50262 retry.go:31] will retry after 418.956642ms: waiting for machine to come up
	I0213 23:08:11.395489   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:11.395974   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:11.396005   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:11.395937   50262 retry.go:31] will retry after 610.320769ms: waiting for machine to come up
	I0213 23:08:12.007695   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.008167   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.008198   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.008115   50262 retry.go:31] will retry after 624.461953ms: waiting for machine to come up
	I0213 23:08:12.634088   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.634519   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.634552   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.634467   50262 retry.go:31] will retry after 903.217503ms: waiting for machine to come up
	I0213 23:08:13.539114   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:13.539683   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:13.539725   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:13.539611   50262 retry.go:31] will retry after 747.647967ms: waiting for machine to come up
	I0213 23:08:14.288632   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:14.289301   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:14.289338   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:14.289236   50262 retry.go:31] will retry after 1.415080779s: waiting for machine to come up
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.810648669s)
	I0213 23:08:15.110937   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.810587707s)
	I0213 23:08:15.110961   49120 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:15.110969   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0213 23:08:15.111009   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:17.178104   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067071549s)
	I0213 23:08:17.178130   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0213 23:08:17.178156   49120 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:17.178204   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:15.706329   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:15.706863   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:15.706901   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:15.706769   50262 retry.go:31] will retry after 1.500671136s: waiting for machine to come up
	I0213 23:08:17.209706   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:17.210252   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:17.210278   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:17.210198   50262 retry.go:31] will retry after 1.743342291s: waiting for machine to come up
	I0213 23:08:18.956397   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:18.956934   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:18.956971   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:18.956874   50262 retry.go:31] will retry after 2.095777111s: waiting for machine to come up
	I0213 23:08:18.227625   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.049388261s)
	I0213 23:08:18.227663   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 23:08:18.227691   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:18.227756   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:21.120783   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.892997016s)
	I0213 23:08:21.120823   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0213 23:08:21.120854   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.120908   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.055630   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:21.056028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:21.056106   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:21.056004   50262 retry.go:31] will retry after 3.144708692s: waiting for machine to come up
	I0213 23:08:24.202158   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:24.202562   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:24.202584   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:24.202515   50262 retry.go:31] will retry after 3.072407019s: waiting for machine to come up
	I0213 23:08:23.793772   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.672817599s)
	I0213 23:08:23.793813   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0213 23:08:23.793841   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:23.793916   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:25.866352   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.072399119s)
	I0213 23:08:25.866388   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0213 23:08:25.866422   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:25.866469   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:27.315469   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.44897051s)
	I0213 23:08:27.315502   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0213 23:08:27.315534   49120 cache_images.go:123] Successfully loaded all cached images
	I0213 23:08:27.315540   49120 cache_images.go:92] LoadImages completed in 16.584715329s
	I0213 23:08:27.315650   49120 ssh_runner.go:195] Run: crio config
	I0213 23:08:27.383180   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:27.383203   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:27.383224   49120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:27.383249   49120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778731 NodeName:no-preload-778731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:27.383445   49120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778731"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:27.383545   49120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-778731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:27.383606   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:08:27.393312   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:27.393384   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:27.401513   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0213 23:08:27.419705   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:08:27.439236   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0213 23:08:27.457026   49120 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:27.461679   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:27.474701   49120 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731 for IP: 192.168.83.31
	I0213 23:08:27.474740   49120 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:27.474922   49120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:27.474966   49120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:27.475042   49120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.key
	I0213 23:08:27.475102   49120 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key.049c2370
	I0213 23:08:27.475138   49120 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key
	I0213 23:08:27.475241   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:27.475271   49120 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:27.475281   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:27.475305   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:27.475326   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:27.475360   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:27.475401   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:27.475997   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:27.500212   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:27.526078   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:27.552892   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:27.579169   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:27.603962   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:27.628862   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:27.653046   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:27.681039   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:27.708026   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:28.658782   49715 start.go:369] acquired machines lock for "default-k8s-diff-port-083863" in 3m25.907988779s
	I0213 23:08:28.658844   49715 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:28.658851   49715 fix.go:54] fixHost starting: 
	I0213 23:08:28.659235   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:28.659276   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:28.677314   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0213 23:08:28.677718   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:28.678315   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:08:28.678355   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:28.678727   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:28.678935   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:28.679109   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:08:28.680868   49715 fix.go:102] recreateIfNeeded on default-k8s-diff-port-083863: state=Stopped err=<nil>
	I0213 23:08:28.680915   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	W0213 23:08:28.681100   49715 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:28.683083   49715 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-083863" ...
	I0213 23:08:27.278610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279033   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has current primary IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279068   49443 main.go:141] libmachine: (embed-certs-340656) Found IP for machine: 192.168.61.56
	I0213 23:08:27.279085   49443 main.go:141] libmachine: (embed-certs-340656) Reserving static IP address...
	I0213 23:08:27.279524   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.279553   49443 main.go:141] libmachine: (embed-certs-340656) Reserved static IP address: 192.168.61.56
	I0213 23:08:27.279572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | skip adding static IP to network mk-embed-certs-340656 - found existing host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"}
	I0213 23:08:27.279592   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Getting to WaitForSSH function...
	I0213 23:08:27.279609   49443 main.go:141] libmachine: (embed-certs-340656) Waiting for SSH to be available...
	I0213 23:08:27.282041   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282383   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.282417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282517   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH client type: external
	I0213 23:08:27.282548   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa (-rw-------)
	I0213 23:08:27.282582   49443 main.go:141] libmachine: (embed-certs-340656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:27.282598   49443 main.go:141] libmachine: (embed-certs-340656) DBG | About to run SSH command:
	I0213 23:08:27.282688   49443 main.go:141] libmachine: (embed-certs-340656) DBG | exit 0
	I0213 23:08:27.374230   49443 main.go:141] libmachine: (embed-certs-340656) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:27.374589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetConfigRaw
	I0213 23:08:27.375330   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.378273   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378648   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.378682   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378917   49443 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:08:27.379092   49443 machine.go:88] provisioning docker machine ...
	I0213 23:08:27.379109   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:27.379298   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379491   49443 buildroot.go:166] provisioning hostname "embed-certs-340656"
	I0213 23:08:27.379521   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379667   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.382028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382351   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.382404   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382562   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.382728   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.382880   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.383023   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.383213   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.383662   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.383682   49443 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname
	I0213 23:08:27.526044   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-340656
	
	I0213 23:08:27.526075   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.529185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529526   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.529556   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529660   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.529852   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530056   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530203   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.530356   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.530695   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.530725   49443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-340656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-340656/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-340656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:27.664926   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:27.664966   49443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:27.664993   49443 buildroot.go:174] setting up certificates
	I0213 23:08:27.665004   49443 provision.go:83] configureAuth start
	I0213 23:08:27.665019   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.665429   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.668520   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.668912   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.668937   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.669172   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.671996   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672365   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.672411   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672620   49443 provision.go:138] copyHostCerts
	I0213 23:08:27.672684   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:27.672706   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:27.672778   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:27.672914   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:27.672929   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:27.672966   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:27.673049   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:27.673060   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:27.673089   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:27.673187   49443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.embed-certs-340656 san=[192.168.61.56 192.168.61.56 localhost 127.0.0.1 minikube embed-certs-340656]
	I0213 23:08:27.924954   49443 provision.go:172] copyRemoteCerts
	I0213 23:08:27.925011   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:27.925033   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.928037   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928376   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.928410   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928588   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.928779   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.928960   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.929085   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.019335   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:28.043949   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 23:08:28.066824   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:08:28.089010   49443 provision.go:86] duration metric: configureAuth took 423.986916ms
	I0213 23:08:28.089043   49443 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:28.089251   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:28.089316   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.091655   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.091955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.091984   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.092151   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.092310   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092440   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092553   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.092694   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.092999   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.093014   49443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:28.402931   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:28.402953   49443 machine.go:91] provisioned docker machine in 1.023849221s
	I0213 23:08:28.402962   49443 start.go:300] post-start starting for "embed-certs-340656" (driver="kvm2")
	I0213 23:08:28.402972   49443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:28.402986   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.403246   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:28.403266   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.405815   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.406201   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406331   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.406514   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.406703   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.406867   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.500638   49443 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:28.504820   49443 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:28.504839   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:28.504899   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:28.504967   49443 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:28.505051   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:28.514593   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:28.536607   49443 start.go:303] post-start completed in 133.632311ms
	I0213 23:08:28.536653   49443 fix.go:56] fixHost completed within 19.429451259s
	I0213 23:08:28.536673   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.539355   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539715   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.539739   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539914   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.540115   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540275   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540420   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.540581   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.540917   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.540932   49443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:28.658649   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865708.631208852
	
	I0213 23:08:28.658674   49443 fix.go:206] guest clock: 1707865708.631208852
	I0213 23:08:28.658682   49443 fix.go:219] Guest: 2024-02-13 23:08:28.631208852 +0000 UTC Remote: 2024-02-13 23:08:28.536657964 +0000 UTC m=+254.042699377 (delta=94.550888ms)
	I0213 23:08:28.658701   49443 fix.go:190] guest clock delta is within tolerance: 94.550888ms
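
	The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the local reading and accept the ~94ms delta as within tolerance. A small sketch of that comparison, using the two timestamps from the log, is shown below; driftWithinTolerance is an illustrative helper and the 2-second tolerance is an assumed value, not necessarily the one minikube uses.

	package main

	import (
		"fmt"
		"time"
	)

	// driftWithinTolerance returns the absolute guest/host clock difference and
	// whether it is small enough that no time resync is needed.
	func driftWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log: guest clock read over SSH vs. the local reading.
		guest := time.Unix(1707865708, 631208852)
		host := time.Date(2024, 2, 13, 23, 8, 28, 536657964, time.UTC)
		delta, ok := driftWithinTolerance(guest, host, 2*time.Second) // assumed tolerance
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
	}
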
	I0213 23:08:28.658707   49443 start.go:83] releasing machines lock for "embed-certs-340656", held for 19.551560323s
	I0213 23:08:28.658730   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.658982   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:28.662069   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662449   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.662480   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662651   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663245   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663430   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663521   49443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:28.663567   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.663688   49443 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:28.663712   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.666417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666867   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.666900   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667039   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.667185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667234   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667418   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667467   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667518   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.667589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667736   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.782794   49443 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:28.788743   49443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:28.933478   49443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:28.940543   49443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:28.940632   49443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:28.958972   49443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:28.958994   49443 start.go:475] detecting cgroup driver to use...
	I0213 23:08:28.959084   49443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:28.977833   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:28.996142   49443 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:28.996205   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:29.015509   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:29.029839   49443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:29.140405   49443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:29.265524   49443 docker.go:233] disabling docker service ...
	I0213 23:08:29.265597   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:29.283479   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:29.300116   49443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:29.428731   49443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:29.555072   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:29.569803   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:29.589259   49443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:29.589329   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.600653   49443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:29.600732   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.612313   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.624637   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
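
	The three sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod" after it. The sketch below applies the equivalent rewrites to a config string locally with Go's regexp package; rewriteCrioConf is an illustrative helper, while the real run executes sed on the guest over SSH.

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the logged sed edits: pin the pause image, drop any
	// existing conmon_cgroup line, and replace the cgroup_manager line with
	// cgroupfs plus conmon_cgroup = "pod".
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^[ \t]*conmon_cgroup = .*\n?`).
			ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.6"

	[crio.runtime]
	conmon_cgroup = "system.slice"
	cgroup_manager = "systemd"
	`
		fmt.Print(rewriteCrioConf(in))
	}
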
	I0213 23:08:29.636279   49443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:29.648496   49443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:29.658957   49443 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:29.659020   49443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:29.673605   49443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:29.684589   49443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:29.800899   49443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:29.989345   49443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:29.989423   49443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
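
	After restarting CRI-O, the test waits up to 60s for /var/run/crio/crio.sock to appear before talking to crictl. A minimal polling loop for that wait is sketched below; waitForSocket is an illustrative helper and the 500ms poll interval is an assumed value.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes, mirroring the
	// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is ready")
	}
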
	I0213 23:08:29.995420   49443 start.go:543] Will wait 60s for crictl version
	I0213 23:08:29.995489   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:08:30.000012   49443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:30.047026   49443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:30.047114   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.095456   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.146027   49443 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:28.684576   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Start
	I0213 23:08:28.684757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring networks are active...
	I0213 23:08:28.685582   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network default is active
	I0213 23:08:28.685942   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network mk-default-k8s-diff-port-083863 is active
	I0213 23:08:28.686429   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Getting domain xml...
	I0213 23:08:28.687208   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Creating domain...
	I0213 23:08:30.003148   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting to get IP...
	I0213 23:08:30.004175   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004634   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004725   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.004599   50394 retry.go:31] will retry after 210.109414ms: waiting for machine to come up
	I0213 23:08:30.215983   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216407   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216439   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.216359   50394 retry.go:31] will retry after 367.743906ms: waiting for machine to come up
	I0213 23:08:30.586081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586629   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586663   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.586583   50394 retry.go:31] will retry after 342.736609ms: waiting for machine to come up
	I0213 23:08:30.931248   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931707   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931738   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.931656   50394 retry.go:31] will retry after 597.326691ms: waiting for machine to come up
	I0213 23:08:31.530395   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530818   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530848   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:31.530767   50394 retry.go:31] will retry after 749.518323ms: waiting for machine to come up
	I0213 23:08:32.281688   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282102   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282138   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:32.282052   50394 retry.go:31] will retry after 760.722423ms: waiting for machine to come up
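
	The retry.go lines above ("will retry after 210ms / 367ms / 597ms / ...") show the machine driver waiting for a DHCP lease with roughly exponential, jittered backoff. The sketch below reproduces that retry shape in plain Go; retryExpo is an illustrative helper and the exact backoff curve minikube uses is not claimed here.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with jittered, roughly doubling delays, capped at max.
	func retryExpo(fn func() error, initial, max time.Duration, attempts int) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d failed (%v); will retry after %v\n", i+1, err, jittered)
			time.Sleep(jittered)
			if delay *= 2; delay > max {
				delay = max
			}
		}
		return err
	}

	func main() {
		calls := 0
		err := retryExpo(func() error {
			calls++
			if calls < 4 {
				return errors.New("machine has no IP yet") // stands in for the libvirt DHCP lookup
			}
			return nil
		}, 200*time.Millisecond, 5*time.Second, 10)
		fmt.Println("result:", err)
	}
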
	I0213 23:08:27.731687   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:27.755515   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:27.774677   49120 ssh_runner.go:195] Run: openssl version
	I0213 23:08:27.780042   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:27.789684   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794384   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794443   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.800052   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:27.809570   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:27.818781   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823148   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823241   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.829043   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:27.839290   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:27.849614   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854661   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854720   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.860365   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
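
	The pattern above (openssl x509 -hash -noout followed by ln -fs into /etc/ssl/certs/<hash>.0) installs each CA certificate under its OpenSSL subject-hash name so the system trust store can find it. A sketch of the same two steps from Go is shown below; linkCertByHash is an illustrative helper that shells out to the same openssl command seen in the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash asks openssl for the certificate's subject hash and creates
	// certsDir/<hash>.0 pointing at the PEM file.
	func linkCertByHash(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate ln -fs: replace an existing link if present
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Paths mirror the log, which links 16200.pem, 162002.pem and minikubeCA.pem.
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
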
	I0213 23:08:27.870548   49120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:27.874967   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:27.880745   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:27.886409   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:27.892063   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:27.897857   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:27.903804   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
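
	Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509, as in the sketch below; expiresWithin is an illustrative helper, not minikube code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file expires
	// within d, the question `openssl x509 -checkend 86400` answers for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
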
	I0213 23:08:27.909720   49120 kubeadm.go:404] StartCluster: {Name:no-preload-778731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:27.909833   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:27.909924   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:27.951061   49120 cri.go:89] found id: ""
	I0213 23:08:27.951158   49120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:27.961916   49120 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:27.961941   49120 kubeadm.go:636] restartCluster start
	I0213 23:08:27.961993   49120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:27.971549   49120 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:27.972633   49120 kubeconfig.go:92] found "no-preload-778731" server: "https://192.168.83.31:8443"
	I0213 23:08:27.975092   49120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:27.983592   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:27.983650   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:27.993448   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.483988   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.484086   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.499804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.984581   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.984671   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.995887   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.484572   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.484680   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.496906   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.984503   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.984569   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.997813   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.484312   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.484391   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.501606   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.984144   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.984237   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.999418   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.483900   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.483977   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.498536   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.983688   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.983783   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.998804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:32.484556   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.484662   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:32.499238   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.147474   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:30.150438   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.150826   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:30.150857   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.151054   49443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:30.155517   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
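
	The bash one-liner above makes the host.minikube.internal entry in /etc/hosts idempotent: it drops any existing line for that name, appends the current mapping, and copies the result back. A Go sketch of the same update is shown below; ensureHostsEntry is an illustrative helper and writes to a temp path because /etc/hosts itself needs root.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing lines for name and appends "ip\tname",
	// the same net effect as the grep -v / echo / cp pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, _ := os.ReadFile(path) // a missing file just means "no existing entries"
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts.example", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
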
	I0213 23:08:30.168463   49443 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:30.168543   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:30.210212   49443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
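
	The decision above ("couldn't find preloaded image ... assuming images are not preloaded") comes from inspecting `sudo crictl images --output json` for the expected kube-apiserver tag. A sketch of that check is below; the JSON field names are assumed from the CRI ListImages response shape and hasImage is an illustrative helper.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imageList models the part of `crictl images --output json` this check needs;
	// the field names are an assumption, not taken from this log.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any image tag contains want.
	func hasImage(out []byte, want string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if strings.Contains(tag, want) {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		found, err := hasImage(out, "registry.k8s.io/kube-apiserver:v1.28.4")
		fmt.Println(found, err)
	}
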
	I0213 23:08:30.210296   49443 ssh_runner.go:195] Run: which lz4
	I0213 23:08:30.214665   49443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:30.219355   49443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:30.219383   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:32.244671   49443 crio.go:444] Took 2.030037 seconds to copy over tarball
	I0213 23:08:32.244757   49443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:33.043974   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044478   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:33.044417   50394 retry.go:31] will retry after 1.030870704s: waiting for machine to come up
	I0213 23:08:34.077209   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077662   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077692   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:34.077625   50394 retry.go:31] will retry after 1.450536952s: waiting for machine to come up
	I0213 23:08:35.529659   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530101   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530135   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:35.530042   50394 retry.go:31] will retry after 1.82898716s: waiting for machine to come up
	I0213 23:08:37.360889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361314   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361343   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:37.361270   50394 retry.go:31] will retry after 1.83473409s: waiting for machine to come up
	I0213 23:08:32.984096   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.984203   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.001189   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.483705   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.499694   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.983927   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.984057   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.999205   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.483708   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.483798   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.498840   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.984372   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.984461   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.999079   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.483661   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.497573   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.983985   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.984088   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.995899   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.484546   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.484660   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.496286   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.983902   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.984113   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.995778   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.484405   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.484518   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.495219   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.549721   49443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304931423s)
	I0213 23:08:35.549748   49443 crio.go:451] Took 3.305051 seconds to extract the tarball
	I0213 23:08:35.549778   49443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:35.590195   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:35.640735   49443 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:35.640768   49443 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:35.640850   49443 ssh_runner.go:195] Run: crio config
	I0213 23:08:35.707018   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:35.707046   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:35.707072   49443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:35.707117   49443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-340656 NodeName:embed-certs-340656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:35.707294   49443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-340656"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:35.707405   49443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-340656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:35.707483   49443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:35.717170   49443 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:35.717251   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:35.726586   49443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0213 23:08:35.744139   49443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:35.761480   49443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0213 23:08:35.779911   49443 ssh_runner.go:195] Run: grep 192.168.61.56	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:35.784152   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:35.799376   49443 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656 for IP: 192.168.61.56
	I0213 23:08:35.799417   49443 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:35.799601   49443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:35.799657   49443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:35.799766   49443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/client.key
	I0213 23:08:35.799859   49443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key.aef5f426
	I0213 23:08:35.799913   49443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key
	I0213 23:08:35.800053   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:35.800091   49443 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:35.800107   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:35.800143   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:35.800180   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:35.800215   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:35.800276   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:35.801130   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:35.829983   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:35.856832   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:35.883713   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:35.910759   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:35.937208   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:35.963904   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:35.991562   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:36.022900   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:36.049084   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:36.074152   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:36.098863   49443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:36.115588   49443 ssh_runner.go:195] Run: openssl version
	I0213 23:08:36.120864   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:36.130552   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.134999   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.135068   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.140621   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:36.150963   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:36.160917   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165428   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165472   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.171493   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:36.181635   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:36.191753   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196368   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196444   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.201985   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:36.211839   49443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:36.216608   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:36.222594   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:36.228585   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:36.234646   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:36.240579   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:36.246642   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:36.252961   49443 kubeadm.go:404] StartCluster: {Name:embed-certs-340656 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:36.253087   49443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:36.253149   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:36.297601   49443 cri.go:89] found id: ""
	I0213 23:08:36.297705   49443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:36.308068   49443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:36.308094   49443 kubeadm.go:636] restartCluster start
	I0213 23:08:36.308152   49443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:36.318071   49443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.319274   49443 kubeconfig.go:92] found "embed-certs-340656" server: "https://192.168.61.56:8443"
	I0213 23:08:36.321573   49443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:36.331006   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.331059   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.342313   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.831994   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.832106   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.845071   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.331654   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.331724   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.344311   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.831903   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.831999   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.843671   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.331225   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.331337   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.349021   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.831196   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.831292   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.847050   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.332089   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.332162   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.348108   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.198188   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198570   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198596   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:39.198528   50394 retry.go:31] will retry after 2.722095348s: waiting for machine to come up
	I0213 23:08:41.923545   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923954   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923985   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:41.923904   50394 retry.go:31] will retry after 2.239772531s: waiting for machine to come up
	I0213 23:08:37.984640   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.984743   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.999300   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.999332   49120 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:37.999340   49120 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:37.999349   49120 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:37.999402   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:38.046199   49120 cri.go:89] found id: ""
	I0213 23:08:38.046287   49120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:38.061697   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:38.071295   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:38.071378   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080401   49120 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:38.209853   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.403696   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193792627s)
	I0213 23:08:39.403733   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.602387   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.703317   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.783257   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:39.783347   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.284357   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.784437   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.284302   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.783582   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.284435   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.312653   49120 api_server.go:72] duration metric: took 2.529396171s to wait for apiserver process to appear ...
	I0213 23:08:42.312698   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:42.312719   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:42.313220   49120 api_server.go:269] stopped: https://192.168.83.31:8443/healthz: Get "https://192.168.83.31:8443/healthz": dial tcp 192.168.83.31:8443: connect: connection refused
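For reference, the restart sequence process 49120 just ran boils down to a short series of kubeadm init phase invocations followed by a poll for the apiserver process. A condensed shell sketch of that sequence, run on the guest and using only the commands visible in the log above (the loop is an editorial convenience, not minikube's actual code):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    # wait for the kube-apiserver process to appear, as the pgrep loop above does
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done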
	I0213 23:08:39.832020   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.832156   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.848229   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.331855   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.331992   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.347635   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.831070   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.831185   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.847184   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.331346   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.331444   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.346518   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.831081   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.831160   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.846752   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.331298   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.331389   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.348782   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.831278   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.831373   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.846241   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.331807   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.331876   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.346998   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.831697   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.831792   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.843733   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.331647   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.331762   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.343476   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.165021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165387   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:44.165357   50394 retry.go:31] will retry after 2.886798605s: waiting for machine to come up
	I0213 23:08:47.055186   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055880   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Found IP for machine: 192.168.39.3
	I0213 23:08:47.055923   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserving static IP address...
	I0213 23:08:47.056480   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.056512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserved static IP address: 192.168.39.3
	I0213 23:08:47.056537   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | skip adding static IP to network mk-default-k8s-diff-port-083863 - found existing host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"}
	I0213 23:08:47.056552   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Getting to WaitForSSH function...
	I0213 23:08:47.056567   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for SSH to be available...
	I0213 23:08:47.059414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059844   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.059882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059991   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH client type: external
	I0213 23:08:47.060025   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa (-rw-------)
	I0213 23:08:47.060061   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:47.060077   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | About to run SSH command:
	I0213 23:08:47.060093   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | exit 0
	I0213 23:08:47.154417   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:47.154807   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetConfigRaw
	I0213 23:08:47.155614   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.158506   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.158979   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.159005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.159297   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:08:47.159557   49715 machine.go:88] provisioning docker machine ...
	I0213 23:08:47.159577   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:47.159833   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160012   49715 buildroot.go:166] provisioning hostname "default-k8s-diff-port-083863"
	I0213 23:08:47.160038   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160240   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.163021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163444   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.163476   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163705   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.163908   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164070   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164234   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.164391   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.164762   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.164777   49715 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-083863 && echo "default-k8s-diff-port-083863" | sudo tee /etc/hostname
	I0213 23:08:47.304583   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-083863
	
	I0213 23:08:47.304617   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.307729   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308160   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.308196   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308345   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.308541   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308713   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308921   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.309148   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.309520   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.309539   49715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-083863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-083863/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-083863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:47.442924   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:47.442958   49715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:47.442989   49715 buildroot.go:174] setting up certificates
	I0213 23:08:47.443006   49715 provision.go:83] configureAuth start
	I0213 23:08:47.443024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.443287   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.446220   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446611   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.446646   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446821   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.449591   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.449920   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.449989   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.450162   49715 provision.go:138] copyHostCerts
	I0213 23:08:47.450221   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:47.450241   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:47.450305   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:47.450482   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:47.450497   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:47.450532   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:47.450614   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:47.450625   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:47.450651   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:47.450720   49715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-083863 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube default-k8s-diff-port-083863]
	I0213 23:08:47.522550   49715 provision.go:172] copyRemoteCerts
	I0213 23:08:47.522618   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:47.522647   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.525731   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526189   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.526230   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526410   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.526610   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.526814   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.526971   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:47.626666   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:42.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.095528   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:46.095564   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:46.095581   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.178470   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.178500   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.313729   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.318658   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.318686   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.813274   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.819766   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.819808   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.313432   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.325228   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:47.325274   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.819686   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:08:47.829842   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:08:47.829896   49120 api_server.go:131] duration metric: took 5.517189469s to wait for apiserver health ...
	I0213 23:08:47.829907   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:47.829915   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:47.831685   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
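The 403 and 500 responses above are the normal progression while the freshly restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles, scheduling priority classes, and so on); minikube simply re-polls /healthz until it returns 200 "ok", which happens here at 23:08:47.819686. A minimal sketch of the same probe against the address from the log (the log itself shows that anonymous access to /healthz is rejected with 403 until the RBAC bootstrap completes and succeeds afterwards; -k skips certificate verification and is an illustration-only shortcut):

    until curl -ks https://192.168.83.31:8443/healthz | grep -qx ok; do
      sleep 0.5
    done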
	I0213 23:08:48.354933   49036 start.go:369] acquired machines lock for "old-k8s-version-245122" in 54.536117689s
	I0213 23:08:48.354988   49036 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:48.354996   49036 fix.go:54] fixHost starting: 
	I0213 23:08:48.355410   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:48.355447   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:48.375953   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0213 23:08:48.376414   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:48.376997   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:08:48.377034   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:48.377373   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:48.377578   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:08:48.377709   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:08:48.379630   49036 fix.go:102] recreateIfNeeded on old-k8s-version-245122: state=Stopped err=<nil>
	I0213 23:08:48.379660   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	W0213 23:08:48.379822   49036 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:48.381473   49036 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-245122" ...
	I0213 23:08:44.831390   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.831503   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.845068   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.331710   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.331800   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.343755   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.831306   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.831415   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.844972   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.331510   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:46.331596   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:46.343475   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.343509   49443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:46.343520   49443 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:46.343532   49443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:46.343595   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:46.388343   49443 cri.go:89] found id: ""
	I0213 23:08:46.388417   49443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:46.403792   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:46.413139   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:46.413197   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422541   49443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422566   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:46.551204   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.427625   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.656205   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.776652   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.860844   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:47.860942   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.362058   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.861851   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:49.361973   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:47.655867   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 23:08:47.687226   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:47.719579   49715 provision.go:86] duration metric: configureAuth took 276.554247ms
	I0213 23:08:47.719610   49715 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:47.719857   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:47.719945   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.723023   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723353   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.723386   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723686   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.723889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724074   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724299   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.724469   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.724860   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.724878   49715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:48.093490   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:48.093519   49715 machine.go:91] provisioned docker machine in 933.948787ms
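The "%!s(MISSING)" fragment in the crio.minikube command above (and in the "date +%!s(MISSING).%!N(MISSING)" command a little further down) is a Go format-verb artifact in minikube's own log output, a %s verb logged without its argument; it is not text that was sent to the guest. Under that assumption, what actually runs is a plain printf of the CRI-O options file followed by a service restart:

    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio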
	I0213 23:08:48.093529   49715 start.go:300] post-start starting for "default-k8s-diff-port-083863" (driver="kvm2")
	I0213 23:08:48.093540   49715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:48.093553   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.093887   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:48.093922   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.096941   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097351   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.097385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097701   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.097936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.098145   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.098367   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.188626   49715 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:48.193282   49715 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:48.193320   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:48.193406   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:48.193500   49715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:48.193597   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:48.202782   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:48.235000   49715 start.go:303] post-start completed in 141.454861ms
	I0213 23:08:48.235032   49715 fix.go:56] fixHost completed within 19.576181803s
	I0213 23:08:48.235051   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.238450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.238992   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.239024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.239320   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.239535   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239683   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239846   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.240085   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:48.240390   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:48.240401   49715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:48.354769   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865728.300012904
	
	I0213 23:08:48.354799   49715 fix.go:206] guest clock: 1707865728.300012904
	I0213 23:08:48.354811   49715 fix.go:219] Guest: 2024-02-13 23:08:48.300012904 +0000 UTC Remote: 2024-02-13 23:08:48.235035663 +0000 UTC m=+225.644270499 (delta=64.977241ms)
	I0213 23:08:48.354837   49715 fix.go:190] guest clock delta is within tolerance: 64.977241ms
	I0213 23:08:48.354845   49715 start.go:83] releasing machines lock for "default-k8s-diff-port-083863", held for 19.696026805s
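The clock delta the fix step reports is simply guest minus remote: 23:08:48.300012904 minus 23:08:48.235035663 gives 0.064977241 s, i.e. the 64.977241ms shown, which is within minikube's accepted skew, so no guest clock adjustment is made before the machines lock is released.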
	I0213 23:08:48.354884   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.355246   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:48.358586   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359040   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.359081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359323   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.359961   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360127   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360200   49715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:48.360233   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.360372   49715 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:48.360398   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.363529   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.363715   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364166   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364357   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364394   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364461   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364656   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.364824   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370192   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.370221   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.370404   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370677   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.457230   49715 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:48.484954   49715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:48.636752   49715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:48.644369   49715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:48.644452   49715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:48.667562   49715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
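
Above, any bridge/podman CNI config in /etc/cni/net.d is renamed to *.mk_disabled via find + mv so it no longer conflicts with the CNI minikube manages. A rough Go equivalent of that rename pass is sketched below; the dry-run flag and error handling are illustrative, not taken from minikube's cni package.

// disablecni.go - sketch of disabling bridge/podman CNI configs by renaming
// them to *.mk_disabled, mirroring the find/mv command logged above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string, dryRun bool) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if !dryRun {
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNI("/etc/cni/net.d", true)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("would disable:", files)
}
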
	I0213 23:08:48.667594   49715 start.go:475] detecting cgroup driver to use...
	I0213 23:08:48.667684   49715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:48.689737   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:48.708806   49715 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:48.708876   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:48.728530   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:48.746819   49715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:48.877519   49715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:49.069574   49715 docker.go:233] disabling docker service ...
	I0213 23:08:49.069661   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:49.103853   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:49.122356   49715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:49.272225   49715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:49.412111   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:49.428799   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:49.449679   49715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:49.449734   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.465458   49715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:49.465523   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.480399   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.494161   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
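
The three sed invocations above point cri-o at the pause:3.9 image, switch cgroup_manager to cgroupfs, and reset conmon_cgroup to "pod" in /etc/crio/crio.conf.d/02-crio.conf. The sketch below reproduces those edits as in-memory regexp rewrites; it is illustrative only, since minikube shells out to sed rather than editing the file in Go.

// criocfg.go - sketch of the cri-o config edits logged above (pause_image,
// cgroup_manager, conmon_cgroup) expressed as regexp line rewrites.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmonRe := regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`)
	cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = conmonRe.ReplaceAllString(conf, "") // drop any pre-existing conmon_cgroup line
	conf = cgroupRe.ReplaceAllString(conf,
		fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
	return conf
}

func main() {
	in := strings.Join([]string{
		`pause_image = "registry.k8s.io/pause:3.6"`,
		`cgroup_manager = "systemd"`,
		`conmon_cgroup = "system.slice"`,
	}, "\n")
	fmt.Println(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}

Dropping the old conmon_cgroup line first and re-adding it right after cgroup_manager mirrors the order of the sed delete-then-append seen in the log.
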
	I0213 23:08:49.507964   49715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:49.522486   49715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:49.534468   49715 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:49.534538   49715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:49.554260   49715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:49.566868   49715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:49.725125   49715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:49.963096   49715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:49.963172   49715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:49.970420   49715 start.go:543] Will wait 60s for crictl version
	I0213 23:08:49.970508   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:08:49.976177   49715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:50.024316   49715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:50.024407   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.080031   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.133918   49715 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:48.382835   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Start
	I0213 23:08:48.383129   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring networks are active...
	I0213 23:08:48.384069   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network default is active
	I0213 23:08:48.384458   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network mk-old-k8s-version-245122 is active
	I0213 23:08:48.385051   49036 main.go:141] libmachine: (old-k8s-version-245122) Getting domain xml...
	I0213 23:08:48.387192   49036 main.go:141] libmachine: (old-k8s-version-245122) Creating domain...
	I0213 23:08:49.933195   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting to get IP...
	I0213 23:08:49.934463   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:49.935084   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:49.935109   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:49.934961   50565 retry.go:31] will retry after 206.578168ms: waiting for machine to come up
	I0213 23:08:50.143704   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.144239   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.144263   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.144177   50565 retry.go:31] will retry after 378.113433ms: waiting for machine to come up
	I0213 23:08:50.524043   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.524670   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.524703   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.524629   50565 retry.go:31] will retry after 468.261692ms: waiting for machine to come up
	I0213 23:08:50.995002   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.995616   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.995645   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.995524   50565 retry.go:31] will retry after 437.792222ms: waiting for machine to come up
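
The retry.go lines above poll the libvirt DHCP leases with a growing delay until old-k8s-version-245122 reports an IP address. A generic sketch of that poll-with-backoff pattern is below; the backoff factor and jitter are assumptions for illustration, not minikube's exact retry schedule.

// retryip.go - sketch of "waiting for machine to come up": poll a lookup
// function with an increasing, slightly jittered delay until it succeeds.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		// Add some jitter so repeated starts do not poll in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.72.10", nil // placeholder address for the demo
	}, 10)
	fmt.Println(ip, err)
}
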
	I0213 23:08:50.135427   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:50.139087   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139523   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:50.139556   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139840   49715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:50.145191   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
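
The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway IP, dropping any stale entry first. A small Go sketch of that idempotent rewrite follows; the temp-file-and-copy step used by the shell version is elided here.

// hostsentry.go - sketch of ensuring a single tab-separated hosts entry for a
// name, as done for host.minikube.internal above.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop stale entries for this name
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	current := "127.0.0.1\tlocalhost\n192.168.39.254\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(current, "192.168.39.1", "host.minikube.internal"))
}
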
	I0213 23:08:50.159814   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:50.159873   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:50.208873   49715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:50.208947   49715 ssh_runner.go:195] Run: which lz4
	I0213 23:08:50.214254   49715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:08:50.219979   49715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:50.220013   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:47.833116   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:47.862550   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:47.895377   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:47.919843   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:47.919894   49120 system_pods.go:61] "coredns-76f75df574-hgzcn" [a384c748-9d5b-4d07-b03c-5a65b3d7a450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:47.919907   49120 system_pods.go:61] "etcd-no-preload-778731" [44169811-10f1-4d3e-8eaa-b525dd0f722f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:47.919920   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [126febb5-8d0b-4162-b320-7fd718b4a974] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:47.919929   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [a7be9641-1bd0-41f9-853a-73b522c60746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:47.919945   49120 system_pods.go:61] "kube-proxy-msxf7" [81201ce9-6f3d-457c-b582-eb8a17dbf4eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:47.919968   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [72f487c5-c42e-4e42-85c8-3b3df6bccd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:47.919984   49120 system_pods.go:61] "metrics-server-57f55c9bc5-r44rm" [ae0751b9-57fe-4d99-b41c-5c685b846e1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:47.919996   49120 system_pods.go:61] "storage-provisioner" [e1d157b3-7ce1-488c-a3ea-ab0e8da83fb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:47.920009   49120 system_pods.go:74] duration metric: took 24.606913ms to wait for pod list to return data ...
	I0213 23:08:47.920031   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:47.930765   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:47.930810   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:47.930827   49120 node_conditions.go:105] duration metric: took 10.783663ms to run NodePressure ...
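
node_conditions.go above reads the node's ephemeral-storage capacity (17784752Ki) and CPU count before continuing. The sketch below shows the kind of unit conversion and minimum check involved; the 10GiB and 2-CPU thresholds are illustrative assumptions, not values taken from minikube.

// nodecapacity.go - sketch of converting a Ki-suffixed capacity figure to
// bytes and checking it, plus the CPU count, against assumed minimums.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func kiToBytes(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

func main() {
	storage, err := kiToBytes("17784752Ki")
	if err != nil {
		panic(err)
	}
	cpus := 2
	const minStorage = 10 << 30 // assumed 10GiB minimum
	const minCPUs = 2           // assumed minimum CPU count
	fmt.Printf("storage=%dB ok=%v cpus=%d ok=%v\n",
		storage, storage >= minStorage, cpus, cpus >= minCPUs)
}
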
	I0213 23:08:47.930848   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:48.401055   49120 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407167   49120 kubeadm.go:787] kubelet initialised
	I0213 23:08:48.407238   49120 kubeadm.go:788] duration metric: took 6.148946ms waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407260   49120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:48.414170   49120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:50.427883   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:52.431208   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:49.861114   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.361308   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.861249   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.894694   49443 api_server.go:72] duration metric: took 3.033850926s to wait for apiserver process to appear ...
	I0213 23:08:50.894724   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:50.894746   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:50.895231   49443 api_server.go:269] stopped: https://192.168.61.56:8443/healthz: Get "https://192.168.61.56:8443/healthz": dial tcp 192.168.61.56:8443: connect: connection refused
	I0213 23:08:51.394882   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:51.435131   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:51.435705   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:51.435733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:51.435616   50565 retry.go:31] will retry after 631.237829ms: waiting for machine to come up
	I0213 23:08:52.069120   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.069697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.069719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.069617   50565 retry.go:31] will retry after 756.691364ms: waiting for machine to come up
	I0213 23:08:52.828166   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.828631   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.828662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.828562   50565 retry.go:31] will retry after 761.909065ms: waiting for machine to come up
	I0213 23:08:53.592196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:53.592753   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:53.592779   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:53.592685   50565 retry.go:31] will retry after 1.153412106s: waiting for machine to come up
	I0213 23:08:54.747606   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:54.748184   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:54.748221   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:54.748113   50565 retry.go:31] will retry after 1.198347182s: waiting for machine to come up
	I0213 23:08:55.947978   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:55.948524   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:55.948545   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:55.948469   50565 retry.go:31] will retry after 2.116247229s: waiting for machine to come up
	I0213 23:08:52.713946   49715 crio.go:444] Took 2.499735 seconds to copy over tarball
	I0213 23:08:52.714030   49715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:56.483125   49715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.769061262s)
	I0213 23:08:56.483156   49715 crio.go:451] Took 3.769175 seconds to extract the tarball
	I0213 23:08:56.483167   49715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:56.524290   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:56.576319   49715 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:56.576349   49715 cache_images.go:84] Images are preloaded, skipping loading
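
After the preload tarball is extracted, the second `crictl images --output json` run above satisfies the check that every expected image is present, so image loading is skipped. A sketch of that comparison follows; the JSON field names ("images", "repoTags") reflect typical crictl output and are an assumption here rather than something shown in this log.

// preloadcheck.go - sketch of deciding whether all expected images are already
// present, based on crictl's JSON image listing.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func allPreloaded(crictlJSON []byte, expected []string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(crictlJSON, &out); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range expected {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]}]}`)
	ok, err := allPreloaded(sample, []string{"registry.k8s.io/kube-apiserver:v1.28.4"})
	fmt.Println(ok, err)
}
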
	I0213 23:08:56.576435   49715 ssh_runner.go:195] Run: crio config
	I0213 23:08:56.633481   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:08:56.633514   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:56.633537   49715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:56.633561   49715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-083863 NodeName:default-k8s-diff-port-083863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:56.633744   49715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-083863"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:56.633838   49715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-083863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 23:08:56.633930   49715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:56.643018   49715 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:56.643110   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:56.652116   49715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 23:08:56.670140   49715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:56.687456   49715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
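
The kubeadm config and kubelet unit printed above are generated from the cluster parameters (advertise address 192.168.39.3, bind port 8444, node name) and copied to the guest. The sketch below renders just the InitConfiguration fragment with text/template; the template text and field names are illustrative, not minikube's actual bootstrapper template.

// kubeadmcfg.go - sketch of templating the InitConfiguration fragment shown
// in the kubeadm config above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	data := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.39.3", 8444, "default-k8s-diff-port-083863"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
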
	I0213 23:08:56.707317   49715 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:56.711339   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:56.726090   49715 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863 for IP: 192.168.39.3
	I0213 23:08:56.726139   49715 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:56.726320   49715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:56.726381   49715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:56.726486   49715 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.key
	I0213 23:08:56.755690   49715 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key.599d509e
	I0213 23:08:56.755797   49715 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key
	I0213 23:08:56.755953   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:56.755996   49715 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:56.756008   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:56.756042   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:56.756072   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:56.756104   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:56.756157   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:56.756999   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:56.790072   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:56.821182   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:56.849753   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:56.875241   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:56.901057   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:56.929989   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:56.959488   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:56.991678   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:57.019756   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:57.047743   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:57.078812   49715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:57.097081   49715 ssh_runner.go:195] Run: openssl version
	I0213 23:08:57.103754   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:57.117364   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124069   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124160   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.132252   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:57.145398   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:57.158348   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164091   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164158   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.171693   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:57.185004   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:57.198410   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204432   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204495   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.210331   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:57.221567   49715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:57.226357   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:57.232307   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:57.239034   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:57.245485   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:57.252782   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:57.259406   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
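
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 is sketched below; the certificate path is just an example.

// certexpiry.go - sketch of the -checkend 86400 style expiry check: report
// whether a PEM certificate's NotAfter falls within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
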
	I0213 23:08:57.265644   49715 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:57.265744   49715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:57.265820   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:57.313129   49715 cri.go:89] found id: ""
	I0213 23:08:57.313210   49715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:57.323716   49715 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:57.323747   49715 kubeadm.go:636] restartCluster start
	I0213 23:08:57.323837   49715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:57.333805   49715 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.335100   49715 kubeconfig.go:92] found "default-k8s-diff-port-083863" server: "https://192.168.39.3:8444"
	I0213 23:08:57.337669   49715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:57.347371   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.347434   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.359168   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:53.424206   49120 pod_ready.go:92] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:53.424235   49120 pod_ready.go:81] duration metric: took 5.01002772s waiting for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:53.424249   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:55.432858   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:54.636558   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.636595   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.636612   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.714679   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.714727   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.894910   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.909668   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:54.909716   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.395328   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.401124   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.401155   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
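
The loop above requests /healthz roughly every half second, logging the 403 and 500 bodies until the apiserver finally reports ok. A self-contained sketch of such a polling loop follows; the skipped TLS verification and the 2-minute budget are simplifications for the sketch, not how minikube configures this probe.

// healthzpoll.go - sketch of polling the apiserver healthz endpoint until it
// returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not report ok within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.56:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The early 403 responses in the log come from anonymous access being rejected before RBAC bootstrap roles exist; the later 500s show individual poststarthooks (rbac/bootstrap-roles) still pending, which is why the loop keeps retrying rather than failing on the first non-200 status.
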
	I0213 23:08:55.895827   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.901814   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.901848   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.395611   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.402367   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.402404   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.894889   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.900228   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.900267   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.394804   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.404774   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.404811   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.895090   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.902470   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.902527   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:58.395650   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:58.404727   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:08:58.413383   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:08:58.413425   49443 api_server.go:131] duration metric: took 7.518687282s to wait for apiserver health ...
	I0213 23:08:58.413437   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:58.413444   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:58.415682   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:58.417320   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:58.436763   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:58.468658   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:58.482719   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:58.482755   49443 system_pods.go:61] "coredns-5dd5756b68-h86p6" [9d274749-fe12-43c1-b30c-70586c04daf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:58.482762   49443 system_pods.go:61] "etcd-embed-certs-340656" [1fbdd834-b8c1-48c9-aab7-3c72d7012eca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:58.482770   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [3bb1cfb1-8fea-4b7a-a459-a709010ee6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:58.482783   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [f8035337-1819-4b0b-83eb-1992445c0185] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:58.482790   49443 system_pods.go:61] "kube-proxy-swxwt" [2bbc949c-f478-4c01-9e81-884a05a9a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:58.482795   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [923ef614-eef1-4e32-ae83-2e540841060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:58.482831   49443 system_pods.go:61] "metrics-server-57f55c9bc5-lmcwv" [a948cc5d-01b6-4298-a7c7-24d9704497d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:58.482846   49443 system_pods.go:61] "storage-provisioner" [9fc17bde-ff30-4ed7-829c-3d59badd55f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:58.482854   49443 system_pods.go:74] duration metric: took 14.17202ms to wait for pod list to return data ...
	I0213 23:08:58.482865   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:58.487666   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:58.487710   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:58.487723   49443 node_conditions.go:105] duration metric: took 4.851634ms to run NodePressure ...
	I0213 23:08:58.487743   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:59.044504   49443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088347   49443 kubeadm.go:787] kubelet initialised
	I0213 23:08:59.088379   49443 kubeadm.go:788] duration metric: took 43.842389ms waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088390   49443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:59.105292   49443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
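
Note: the repeated 500 responses above come from minikube polling the apiserver's /healthz endpoint roughly every half second until it answers 200. The exact minikube helper is not reproduced here; the following is a minimal Go sketch of that kind of poll loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and using the endpoint URL taken from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver /healthz endpoint until it returns 200 or the
// deadline expires, mirroring the retry pattern in the log above (illustrative
// sketch only, not minikube's actual code).
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch: the apiserver cert is self-signed, so the
		// health probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible in the log
	}
	return fmt.Errorf("apiserver never became healthy within %s", deadline)
}

func main() {
	if err := pollHealthz("https://192.168.61.56:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
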
	I0213 23:08:58.067162   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:58.067629   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:58.067662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:58.067589   50565 retry.go:31] will retry after 2.740013841s: waiting for machine to come up
	I0213 23:09:00.811129   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:00.811590   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:00.811623   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:00.811537   50565 retry.go:31] will retry after 3.449503247s: waiting for machine to come up
	I0213 23:08:57.848036   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.848128   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.863924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.348357   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.348539   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.364081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.848249   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.848321   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.860671   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.348282   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.348385   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.364226   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.847737   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.847838   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.864832   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.348231   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.348311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.360532   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.848115   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.848220   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.861558   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.348101   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.348192   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.360173   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.847696   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.847788   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.859631   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:02.348255   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.348353   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.363081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
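
Note: the "Checking apiserver status" lines above show minikube repeatedly running pgrep for a kube-apiserver process that never appears, until a deadline is hit (the "context deadline exceeded" reported further down). A minimal sketch of that bounded retry follows, reusing the same pgrep pattern; it is not minikube's actual implementation and would need to run on the node with sudo rights.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID runs `pgrep -xnf kube-apiserver.*minikube.*` every 500ms
// until it succeeds or the context deadline expires (sketch only).
func waitForAPIServerPID(ctx context.Context) (string, error) {
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver pid not found: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Println(err) // e.g. wraps "context deadline exceeded", as seen in the log
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
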
	I0213 23:08:57.943272   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:58.432531   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:58.432613   49120 pod_ready.go:81] duration metric: took 5.008354336s waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.432631   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:00.441099   49120 pod_ready.go:102] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:01.440207   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.440235   49120 pod_ready.go:81] duration metric: took 3.0075951s waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.440249   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446456   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.446483   49120 pod_ready.go:81] duration metric: took 6.224957ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446495   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452476   49120 pod_ready.go:92] pod "kube-proxy-msxf7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.452509   49120 pod_ready.go:81] duration metric: took 6.006176ms waiting for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452520   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457619   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.457640   49120 pod_ready.go:81] duration metric: took 5.112826ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457648   49120 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
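
Note: the pod_ready.go lines above poll each control-plane pod until its Ready condition becomes True. A rough client-go equivalent is sketched below; the kubeconfig path is a placeholder and this is not the minikube helper itself, just an illustration of the same wait.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod in kube-system until its Ready condition is True or
// the timeout expires, similar to the pod_ready.go waits in the log (sketch only).
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s not Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	// Placeholder kubeconfig path; point it at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "etcd-no-preload-778731", 4*time.Minute))
}
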
	I0213 23:09:01.113738   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:03.114003   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.262520   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:04.262989   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:04.263018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:04.262939   50565 retry.go:31] will retry after 3.540479459s: waiting for machine to come up
	I0213 23:09:02.847964   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.848073   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.863100   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.347510   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.347608   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.362561   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.847536   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.847635   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.863357   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.347939   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.348026   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.363027   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.847491   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.847576   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.858924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.347449   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.347527   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.359307   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.847845   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.847934   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.859530   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.348136   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.348231   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.360149   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.847699   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.847786   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.859859   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.347717   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:07.347806   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:07.360175   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.360211   49715 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:07.360223   49715 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:07.360234   49715 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:07.360304   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:07.400269   49715 cri.go:89] found id: ""
	I0213 23:09:07.400360   49715 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:07.416990   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:07.426513   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:07.426588   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436165   49715 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436197   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:07.602305   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:03.467176   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:05.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.614199   49443 pod_ready.go:92] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:04.614230   49443 pod_ready.go:81] duration metric: took 5.508903545s waiting for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:04.614244   49443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:06.621198   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:08.622226   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:07.807018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:07.807577   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:07.807609   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:07.807519   50565 retry.go:31] will retry after 4.623412618s: waiting for machine to come up
	I0213 23:09:08.566096   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.757816   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.894570   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.984493   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:08.984609   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.485363   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.984792   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.485221   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.985649   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.485311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.516028   49715 api_server.go:72] duration metric: took 2.531534981s to wait for apiserver process to appear ...
	I0213 23:09:11.516054   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:11.516076   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:08.466006   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.965586   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.623965   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.623991   49443 pod_ready.go:81] duration metric: took 6.009738992s waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.624002   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631790   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.631813   49443 pod_ready.go:81] duration metric: took 7.802592ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631830   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638042   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.638065   49443 pod_ready.go:81] duration metric: took 6.226067ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638077   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645111   49443 pod_ready.go:92] pod "kube-proxy-swxwt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.645135   49443 pod_ready.go:81] duration metric: took 7.051124ms waiting for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645146   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651681   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.651703   49443 pod_ready.go:81] duration metric: took 6.550486ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651712   49443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:12.659172   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:12.435133   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435720   49036 main.go:141] libmachine: (old-k8s-version-245122) Found IP for machine: 192.168.50.36
	I0213 23:09:12.435751   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has current primary IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435762   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserving static IP address...
	I0213 23:09:12.436196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.436241   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | skip adding static IP to network mk-old-k8s-version-245122 - found existing host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"}
	I0213 23:09:12.436262   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserved static IP address: 192.168.50.36
	I0213 23:09:12.436280   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting for SSH to be available...
	I0213 23:09:12.436296   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Getting to WaitForSSH function...
	I0213 23:09:12.438534   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.438892   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.438925   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.439062   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH client type: external
	I0213 23:09:12.439099   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa (-rw-------)
	I0213 23:09:12.439149   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:09:12.439183   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | About to run SSH command:
	I0213 23:09:12.439202   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | exit 0
	I0213 23:09:12.541930   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | SSH cmd err, output: <nil>: 
	I0213 23:09:12.542357   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetConfigRaw
	I0213 23:09:12.543071   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.546226   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546714   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.546747   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546955   49036 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:09:12.547163   49036 machine.go:88] provisioning docker machine ...
	I0213 23:09:12.547200   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:12.547445   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547594   49036 buildroot.go:166] provisioning hostname "old-k8s-version-245122"
	I0213 23:09:12.547615   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547770   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.550250   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.550734   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550939   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.551160   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551322   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.551648   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.551974   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.552000   49036 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname
	I0213 23:09:12.705495   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245122
	
	I0213 23:09:12.705528   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.708503   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.708860   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.708893   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.709092   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.709277   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709657   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.709831   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.710263   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.710285   49036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245122/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:09:12.858225   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
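
Note: the hostname and /etc/hosts commands above are run over SSH against the freshly booted VM using the machine's id_rsa key. A minimal sketch of executing one such command with golang.org/x/crypto/ssh follows, assuming the user, address, and key path shown in the log; host-key checking is skipped, matching the StrictHostKeyChecking=no flags in the log's ssh invocation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one command on the VM, roughly like the provisioning
// steps in the log (sketch only; address, user, and key path come from the log above).
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.50.36:22", "docker",
		"/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa",
		`sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
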
	I0213 23:09:12.858266   49036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:09:12.858287   49036 buildroot.go:174] setting up certificates
	I0213 23:09:12.858300   49036 provision.go:83] configureAuth start
	I0213 23:09:12.858313   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.858624   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.861374   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861727   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.861759   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.864007   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864334   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.864370   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864549   49036 provision.go:138] copyHostCerts
	I0213 23:09:12.864627   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:09:12.864643   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:09:12.864728   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:09:12.864853   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:09:12.864868   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:09:12.864904   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:09:12.865008   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:09:12.865018   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:09:12.865049   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:09:12.865130   49036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245122 san=[192.168.50.36 192.168.50.36 localhost 127.0.0.1 minikube old-k8s-version-245122]
	I0213 23:09:12.938444   49036 provision.go:172] copyRemoteCerts
	I0213 23:09:12.938508   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:09:12.938530   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.941384   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.941758   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941989   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.942202   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.942394   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.942545   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.041212   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:09:13.069849   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 23:09:13.092979   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:09:13.115949   49036 provision.go:86] duration metric: configureAuth took 257.625697ms
	I0213 23:09:13.115983   49036 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:09:13.116196   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:13.116279   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.119207   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119644   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.119684   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119901   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.120096   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120288   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120443   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.120599   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.121149   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.121179   49036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:09:13.453399   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:09:13.453431   49036 machine.go:91] provisioned docker machine in 906.25243ms
	I0213 23:09:13.453444   49036 start.go:300] post-start starting for "old-k8s-version-245122" (driver="kvm2")
	I0213 23:09:13.453459   49036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:09:13.453479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.453816   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:09:13.453849   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.457033   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457355   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.457388   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457560   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.457778   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.457991   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.458207   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.559903   49036 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:09:13.566012   49036 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:09:13.566046   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:09:13.566119   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:09:13.566215   49036 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:09:13.566336   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:09:13.578878   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:13.610396   49036 start.go:303] post-start completed in 156.935564ms
	I0213 23:09:13.610434   49036 fix.go:56] fixHost completed within 25.25543712s
	I0213 23:09:13.610459   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.613960   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614271   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.614330   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614575   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.614828   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615081   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615275   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.615494   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.615954   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.615977   49036 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 23:09:13.759068   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865753.693690059
	
	I0213 23:09:13.759095   49036 fix.go:206] guest clock: 1707865753.693690059
	I0213 23:09:13.759106   49036 fix.go:219] Guest: 2024-02-13 23:09:13.693690059 +0000 UTC Remote: 2024-02-13 23:09:13.610438113 +0000 UTC m=+362.380845041 (delta=83.251946ms)
	I0213 23:09:13.759130   49036 fix.go:190] guest clock delta is within tolerance: 83.251946ms
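The guest-clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the host time if the delta stays within a tolerance. A small sketch of that comparison, with the tolerance and parsing assumed for illustration:

    // clockdelta.go: compare a guest "seconds.nanoseconds" timestamp against a local time.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // withinTolerance parses the output of `date +%s.%N` and reports the signed delta
    // against local, plus whether it is inside tol.
    func withinTolerance(guestOut string, local time.Time, tol time.Duration) (time.Duration, bool) {
    	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	delta := time.Unix(sec, nsec).Sub(local)
    	return delta, math.Abs(float64(delta)) <= float64(tol)
    }

    func main() {
    	// Values mirror the log entry above (delta comes out around 83ms).
    	delta, ok := withinTolerance("1707865753.693690059", time.Unix(1707865753, 610438113), 2*time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }
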
	I0213 23:09:13.759136   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 25.404173426s
	I0213 23:09:13.759161   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.759480   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:13.762537   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.762928   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.762967   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.763172   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763718   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763907   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763998   49036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:09:13.764050   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.764122   49036 ssh_runner.go:195] Run: cat /version.json
	I0213 23:09:13.764149   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.767081   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767387   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767526   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767558   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767736   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.767812   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767834   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.768002   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.768190   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768220   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768343   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768370   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.768490   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.886145   49036 ssh_runner.go:195] Run: systemctl --version
	I0213 23:09:13.892222   49036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:09:14.044107   49036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:09:14.051031   49036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:09:14.051134   49036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:09:14.071908   49036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:09:14.071942   49036 start.go:475] detecting cgroup driver to use...
	I0213 23:09:14.072026   49036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:09:14.091007   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:09:14.105419   49036 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:09:14.105501   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:09:14.120760   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:09:14.135296   49036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:09:14.267338   49036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:09:14.403936   49036 docker.go:233] disabling docker service ...
	I0213 23:09:14.404023   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:09:14.419791   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:09:14.434449   49036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:09:14.569365   49036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:09:14.700619   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:09:14.718646   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:09:14.738870   49036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0213 23:09:14.738944   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.750436   49036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:09:14.750529   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.762397   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.773950   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.786798   49036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:09:14.801457   49036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:09:14.813254   49036 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:09:14.813331   49036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:09:14.830374   49036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:09:14.840984   49036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:09:14.994777   49036 ssh_runner.go:195] Run: sudo systemctl restart crio
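The CRI-O reconfiguration above is done with in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager) followed by a daemon-reload and restart. A sketch of the same key rewrite using a regexp instead of sed; paths and values mirror the log, but this is illustrative rather than minikube's implementation:

    // criocfg.go: rewrite "key = ..." lines in a CRI-O drop-in config (sketch).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setOption replaces any line assigning key with `key = "value"`.
    func setOption(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	conf := []byte("pause_image = \"k8s.gcr.io/pause:3.2\"\ncgroup_manager = \"systemd\"\n")
    	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.1")
    	conf = setOption(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(string(conf))
    }
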
	I0213 23:09:15.193564   49036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:09:15.193657   49036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:09:15.200616   49036 start.go:543] Will wait 60s for crictl version
	I0213 23:09:15.200749   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:15.205888   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:09:15.249751   49036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:09:15.249884   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.302320   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.361046   49036 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0213 23:09:15.362396   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:15.365548   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366008   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:15.366041   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366287   49036 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:09:15.370727   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:15.384064   49036 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:09:15.384171   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:15.432027   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:15.432110   49036 ssh_runner.go:195] Run: which lz4
	I0213 23:09:15.436393   49036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:09:15.440914   49036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:09:15.440956   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
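The preload path above copies the image tarball to the guest and later (see the 23:09:17 entries further down) unpacks it with lz4-compressed tar. A short sketch of that extraction step run via os/exec; the flags mirror the logged command, and running it locally stands in for the ssh_runner call:

    // preload.go: unpack the preloaded image tarball (sketch of the logged tar invocation).
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Equivalent of: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }
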
	I0213 23:09:15.218410   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:15.218442   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:15.218457   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.346077   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.346112   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:15.516188   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.523339   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.523371   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.016747   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.024910   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.024944   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.516538   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.528640   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.528673   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:17.016269   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:17.022413   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:09:17.033775   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:09:17.033807   49715 api_server.go:131] duration metric: took 5.51774459s to wait for apiserver health ...
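The block above is the apiserver readiness loop: GET https://192.168.39.3:8444/healthz repeatedly, tolerating the anonymous 403 and the 500 "healthz check failed" responses until the endpoint returns 200 "ok". A minimal polling sketch under those assumptions (insecure TLS because the probe targets a self-signed apiserver certificate; address and timeout taken from the log for illustration):

    // healthz.go: poll the apiserver /healthz endpoint until it reports ok or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil // body is "ok"
    			}
    			fmt.Printf("healthz returned %d, retrying\n", code)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
    	if err := waitHealthy("https://192.168.39.3:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
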
	I0213 23:09:17.033819   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:09:17.033828   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:17.035635   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:17.037195   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:17.064472   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:17.115519   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:17.133771   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:09:17.133887   49715 system_pods.go:61] "coredns-5dd5756b68-cvtjg" [507ded52-9061-4ab7-8298-31847da5dad3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:09:17.133914   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [2ef46644-d4d0-4e8c-b2aa-4e154780be70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:09:17.133952   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [c1f51407-cfd9-4329-9153-2dacb87952c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:09:17.133975   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [1ad24825-8c75-4220-a316-2dd4826da8fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:09:17.133995   49715 system_pods.go:61] "kube-proxy-zzskr" [fb71ceb1-9f9a-4c8b-ae1e-1eeb91706110] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:09:17.134015   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [4500697c-7313-4217-9843-14edb2c7fdb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:09:17.134042   49715 system_pods.go:61] "metrics-server-57f55c9bc5-p97jh" [dc549bc9-87e4-4cb6-99b5-e937f2916d6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:09:17.134063   49715 system_pods.go:61] "storage-provisioner" [c5ad957d-09f9-46e7-b0e7-e7c0b13f671f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:09:17.134081   49715 system_pods.go:74] duration metric: took 18.533785ms to wait for pod list to return data ...
	I0213 23:09:17.134103   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:17.145025   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:17.145131   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:17.145159   49715 node_conditions.go:105] duration metric: took 11.041762ms to run NodePressure ...
	I0213 23:09:17.145201   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:13.466367   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:15.966324   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:14.661158   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:16.663448   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:19.164418   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.224597   49036 crio.go:444] Took 1.788234 seconds to copy over tarball
	I0213 23:09:17.224685   49036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:09:20.618866   49036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.394137292s)
	I0213 23:09:20.618905   49036 crio.go:451] Took 3.394273 seconds to extract the tarball
	I0213 23:09:20.618918   49036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:09:20.665417   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:20.718004   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:20.718036   49036 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.718175   49036 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.718201   49036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.718126   49036 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.718148   49036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.718154   49036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.718181   49036 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719739   49036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719784   49036 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.719745   49036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.719855   49036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.719951   49036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.720062   49036 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 23:09:20.720172   49036 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.720184   49036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.877532   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.894803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.906336   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.909341   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.910608   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 23:09:20.933612   49036 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 23:09:20.933664   49036 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.933724   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:20.947803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.979922   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.026909   49036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 23:09:21.026953   49036 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.026986   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.034243   49036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 23:09:21.034279   49036 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.034321   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.053547   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:21.068143   49036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 23:09:21.068194   49036 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 23:09:21.068228   49036 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0213 23:09:21.068195   49036 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0213 23:09:21.068318   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.110630   49036 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 23:09:21.110695   49036 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.110747   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.120732   49036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 23:09:21.120777   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.120781   49036 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.120851   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.120887   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.272660   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0213 23:09:21.272723   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 23:09:21.272771   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.272813   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.272858   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 23:09:21.272914   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.272966   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 23:09:17.706218   49715 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713293   49715 kubeadm.go:787] kubelet initialised
	I0213 23:09:17.713322   49715 kubeadm.go:788] duration metric: took 7.076014ms waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713332   49715 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:17.724146   49715 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:19.733686   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.412892   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.970757   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:20.466081   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.467149   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.660264   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:23.660813   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.375314   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 23:09:21.376306   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 23:09:21.376453   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 23:09:21.376491   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 23:09:21.585135   49036 cache_images.go:92] LoadImages completed in 867.071904ms
	W0213 23:09:21.585230   49036 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
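The "needs transfer" decisions above come from listing the runtime's images with `crictl images --output json` and checking whether each required tag is already present. A sketch of that check; the JSON field names (images/id/repoTags) are assumed from crictl's CRI-style output and may vary between versions:

    // imagecheck.go: ask crictl whether a given image tag is already in the container runtime (sketch).
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.16.0")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("preloaded:", ok)
    }
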
	I0213 23:09:21.585316   49036 ssh_runner.go:195] Run: crio config
	I0213 23:09:21.650741   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:21.650767   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:21.650789   49036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:09:21.650812   49036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245122 NodeName:old-k8s-version-245122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:09:21.650991   49036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-245122"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-245122
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.36:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:09:21.651106   49036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-245122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:09:21.651173   49036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 23:09:21.662478   49036 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:09:21.662558   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:09:21.672654   49036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0213 23:09:21.690609   49036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:09:21.708199   49036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0213 23:09:21.728361   49036 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0213 23:09:21.732450   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:21.747349   49036 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122 for IP: 192.168.50.36
	I0213 23:09:21.747391   49036 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:21.747532   49036 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:09:21.747582   49036 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:09:21.747644   49036 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.key
	I0213 23:09:21.958574   49036 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key.e3c4a843
	I0213 23:09:21.958790   49036 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key
	I0213 23:09:21.958978   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:09:21.959024   49036 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:09:21.959040   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:09:21.959090   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:09:21.959135   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:09:21.959168   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:09:21.959234   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:21.960121   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:09:21.986921   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:09:22.011993   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:09:22.038194   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:09:22.064839   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:09:22.089629   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:09:22.116404   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:09:22.141615   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:09:22.167298   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:09:22.194577   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:09:22.220140   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:09:22.245124   49036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:09:22.265798   49036 ssh_runner.go:195] Run: openssl version
	I0213 23:09:22.273510   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:09:22.287657   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294180   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294261   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.300826   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:09:22.313535   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:09:22.324047   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329069   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329171   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.335862   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:09:22.347417   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:09:22.358082   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363477   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363536   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.369915   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:09:22.380910   49036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:09:22.385812   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:09:22.392981   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:09:22.400722   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:09:22.409089   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:09:22.417036   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:09:22.423381   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:09:22.430098   49036 kubeadm.go:404] StartCluster: {Name:old-k8s-version-245122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:09:22.430177   49036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:09:22.430246   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:22.490283   49036 cri.go:89] found id: ""
	I0213 23:09:22.490371   49036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:09:22.500902   49036 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:09:22.500931   49036 kubeadm.go:636] restartCluster start
	I0213 23:09:22.501004   49036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:09:22.511985   49036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:22.513298   49036 kubeconfig.go:92] found "old-k8s-version-245122" server: "https://192.168.50.36:8443"
	I0213 23:09:22.516673   49036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:09:22.526466   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:22.526561   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:22.539541   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.027052   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.027161   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.039390   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.527142   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.527234   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.539846   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.027048   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.027144   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.038367   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.526911   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.527012   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.538906   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.027095   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.027195   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.038232   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.526805   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.526911   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.540281   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:26.026811   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.026908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.039699   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.238007   49715 pod_ready.go:92] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:23.238035   49715 pod_ready.go:81] duration metric: took 5.513854942s waiting for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:23.238051   49715 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.744985   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:24.745007   49715 pod_ready.go:81] duration metric: took 1.506948533s waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.745015   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:26.751610   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:24.965048   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:27.465069   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.159564   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:28.660224   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.527051   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.527135   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.539382   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.026915   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.026990   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.038660   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.527300   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.527391   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.539714   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.027042   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.027124   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.039419   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.527549   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.527649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.540659   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.027032   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.027134   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.038415   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.526595   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.526690   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.538928   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.027041   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.027119   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.040125   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.526693   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.526765   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.540060   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:31.026988   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.027096   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.039327   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.755419   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.254128   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.254154   49715 pod_ready.go:81] duration metric: took 6.509132102s waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.254164   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262007   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.262032   49715 pod_ready.go:81] duration metric: took 7.859557ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262042   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267937   49715 pod_ready.go:92] pod "kube-proxy-zzskr" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.267959   49715 pod_ready.go:81] duration metric: took 5.911683ms waiting for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267967   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273442   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.273462   49715 pod_ready.go:81] duration metric: took 5.488135ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273471   49715 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:29.466908   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.965093   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.159176   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.159463   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.526738   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.526879   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.539174   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.026678   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.026780   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.039078   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.527030   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.527120   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.539058   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.539094   49036 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:32.539105   49036 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:32.539116   49036 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:32.539188   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:32.583832   49036 cri.go:89] found id: ""
	I0213 23:09:32.583931   49036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:32.600343   49036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:32.609666   49036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:32.609744   49036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619068   49036 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619093   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:32.751642   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:33.784796   49036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03311496s)
	I0213 23:09:33.784825   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.013311   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.172539   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.290655   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:34.290759   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:34.791649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.290908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.791035   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:33.283651   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.798120   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.966930   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.465311   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.160502   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:37.163077   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.291009   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.791117   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.809796   49036 api_server.go:72] duration metric: took 2.519141205s to wait for apiserver process to appear ...
	I0213 23:09:36.809851   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:36.809880   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:38.282180   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.282368   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:38.466126   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.967293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.811101   49036 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 23:09:41.811184   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.485465   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.485495   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.485516   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.539632   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.539667   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.809967   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.823007   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:42.823043   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.310359   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.318326   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:43.318384   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.809942   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.816666   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:09:43.824593   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:09:43.824622   49036 api_server.go:131] duration metric: took 7.014763564s to wait for apiserver health ...
	I0213 23:09:43.824639   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:43.824647   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:43.826660   49036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:39.659667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.660321   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.664984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.827993   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:43.837268   49036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:43.855659   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:43.864719   49036 system_pods.go:59] 7 kube-system pods found
	I0213 23:09:43.864756   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:09:43.864764   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:09:43.864770   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:09:43.864778   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Pending
	I0213 23:09:43.864783   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:09:43.864789   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:09:43.864795   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:09:43.864803   49036 system_pods.go:74] duration metric: took 9.113954ms to wait for pod list to return data ...
	I0213 23:09:43.864812   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:43.872183   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:43.872222   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:43.872237   49036 node_conditions.go:105] duration metric: took 7.415138ms to run NodePressure ...
	I0213 23:09:43.872269   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:44.129786   49036 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134864   49036 kubeadm.go:787] kubelet initialised
	I0213 23:09:44.134891   49036 kubeadm.go:788] duration metric: took 5.071047ms waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134901   49036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:44.139027   49036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.143942   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143967   49036 pod_ready.go:81] duration metric: took 4.910454ms waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.143978   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143986   49036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.147838   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147923   49036 pod_ready.go:81] duration metric: took 3.927311ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.147935   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147944   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.152465   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152490   49036 pod_ready.go:81] duration metric: took 4.536109ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.152500   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152508   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.259273   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259309   49036 pod_ready.go:81] duration metric: took 106.789068ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.259325   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259334   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.659385   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659423   49036 pod_ready.go:81] duration metric: took 400.079528ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.659436   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659443   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:45.065474   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065510   49036 pod_ready.go:81] duration metric: took 406.055078ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:45.065524   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065533   49036 pod_ready.go:38] duration metric: took 930.621868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:45.065555   49036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:09:45.100009   49036 ops.go:34] apiserver oom_adj: -16
	I0213 23:09:45.100037   49036 kubeadm.go:640] restartCluster took 22.599099367s
	I0213 23:09:45.100049   49036 kubeadm.go:406] StartCluster complete in 22.6699561s
	I0213 23:09:45.100070   49036 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.100156   49036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:09:45.103031   49036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.103315   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:09:45.103447   49036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:09:45.103540   49036 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103562   49036 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-245122"
	I0213 23:09:45.103571   49036 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103593   49036 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:45.103603   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:45.103638   49036 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103693   49036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245122"
	W0213 23:09:45.103608   49036 addons.go:243] addon metrics-server should already be in state true
	W0213 23:09:45.103577   49036 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:09:45.103879   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104144   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104215   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104227   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.104318   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.103829   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104877   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104904   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.123332   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0213 23:09:45.123486   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0213 23:09:45.123555   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0213 23:09:45.123964   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124143   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124148   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124449   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124469   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124650   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124674   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124654   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124743   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124965   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125030   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125083   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.125564   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125567   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125598   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.125612   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.129046   49036 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-245122"
	W0213 23:09:45.129065   49036 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:09:45.129085   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.129385   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.129415   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.145900   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0213 23:09:45.146570   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.147144   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.147164   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.147448   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.147635   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.156023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.158533   49036 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:09:45.159815   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:09:45.159837   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:09:45.159862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.163799   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164445   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.164472   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164859   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.165112   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.165340   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.165523   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.166097   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0213 23:09:45.166513   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.167086   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.167111   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.167442   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.167623   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.168284   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0213 23:09:45.168855   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.169453   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.169471   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.169702   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.169992   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.171532   49036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:45.170687   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.172965   49036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.172979   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.172983   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:09:45.173009   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.176733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177198   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.177232   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177269   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.177506   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.177675   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.177885   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.190339   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0213 23:09:45.190750   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.191239   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.191267   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.191609   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.191803   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.193470   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.193730   49036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.193748   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:09:45.193769   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.196896   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197422   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.197459   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197745   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.197935   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.198191   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.198301   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.392787   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:09:45.392808   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:09:45.426298   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.440984   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.452209   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:09:45.452239   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:09:45.531203   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:45.531226   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:09:45.593779   49036 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 23:09:45.621016   49036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245122" context rescaled to 1 replicas
	I0213 23:09:45.621056   49036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:09:45.623081   49036 out.go:177] * Verifying Kubernetes components...
	I0213 23:09:45.624623   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:09:45.631546   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:46.116692   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116732   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.116735   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116736   49036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:46.116754   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117125   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117172   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117183   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117192   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117201   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117203   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117218   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117228   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117247   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117667   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117671   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117708   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117728   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117962   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117980   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140111   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.140133   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.140411   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.140441   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140431   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.228877   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.228908   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229250   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229273   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229273   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.229283   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.229293   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229523   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229538   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229558   49036 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:46.231176   49036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:09:46.232329   49036 addons.go:505] enable addons completed in 1.128872958s: enabled=[storage-provisioner default-storageclass metrics-server]
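
The "enable addons completed" line above only confirms that the storage-provisioner, default-storageclass and metrics-server manifests were applied; whether the metrics-server pod ever turns Ready is what the pod_ready polling below keeps checking. A minimal way to verify the addon end to end — assuming the profile/context name old-k8s-version-245122 from this log and the addon's standard k8s-app=metrics-server label — would be:

	out/minikube-linux-amd64 -p old-k8s-version-245122 addons list
	kubectl --context old-k8s-version-245122 -n kube-system get deploy,pods -l k8s-app=metrics-server
	kubectl --context old-k8s-version-245122 top nodes   # succeeds only once metrics-server is actually serving
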
	I0213 23:09:42.783163   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:44.783634   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.281934   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.465665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:45.964909   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:46.160084   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.664267   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.120153   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:50.120636   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:49.781808   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.281392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.968701   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:50.465488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:51.161059   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:53.662099   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.121578   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:53.120859   49036 node_ready.go:49] node "old-k8s-version-245122" has status "Ready":"True"
	I0213 23:09:53.120885   49036 node_ready.go:38] duration metric: took 7.004121529s waiting for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:53.120896   49036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:53.129174   49036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:55.136200   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.283011   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.286197   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.964530   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.964679   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.966183   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.159475   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.160233   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:57.636373   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.137616   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.782611   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:59.465313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.465877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.660202   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.159244   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:02.635052   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:04.636231   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.284083   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.781701   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.966234   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.465225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.160136   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.160817   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.161703   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.636789   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.135398   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.135441   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.782000   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.782948   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.785161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:08.465688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:10.967225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.658937   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.661460   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.138346   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.636437   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:14.282538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.781339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.465521   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.965224   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.162065   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:18.658525   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.648838   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.137226   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:19.282514   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:21.781917   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.966716   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.464644   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.465071   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.659514   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.662481   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.636371   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.136197   49036 pod_ready.go:92] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.136234   49036 pod_ready.go:81] duration metric: took 31.007029263s waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.136249   49036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142089   49036 pod_ready.go:92] pod "etcd-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.142114   49036 pod_ready.go:81] duration metric: took 5.854061ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142127   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149372   49036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.149396   49036 pod_ready.go:81] duration metric: took 7.261015ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149409   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158342   49036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.158371   49036 pod_ready.go:81] duration metric: took 8.953577ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158384   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165154   49036 pod_ready.go:92] pod "kube-proxy-nj7qx" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.165177   49036 pod_ready.go:81] duration metric: took 6.785683ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165186   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533838   49036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.533863   49036 pod_ready.go:81] duration metric: took 368.670292ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533896   49036 pod_ready.go:38] duration metric: took 31.412988042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:10:24.533912   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:10:24.534007   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:10:24.549186   49036 api_server.go:72] duration metric: took 38.928101792s to wait for apiserver process to appear ...
	I0213 23:10:24.549217   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:10:24.549238   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:10:24.557366   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:10:24.558364   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:10:24.558387   49036 api_server.go:131] duration metric: took 9.165129ms to wait for apiserver health ...
	I0213 23:10:24.558396   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:10:24.736365   49036 system_pods.go:59] 8 kube-system pods found
	I0213 23:10:24.736396   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:24.736401   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:24.736405   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:24.736409   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:24.736413   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:24.736417   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:24.736423   49036 system_pods.go:61] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:24.736429   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:24.736437   49036 system_pods.go:74] duration metric: took 178.035411ms to wait for pod list to return data ...
	I0213 23:10:24.736444   49036 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:10:24.934360   49036 default_sa.go:45] found service account: "default"
	I0213 23:10:24.934390   49036 default_sa.go:55] duration metric: took 197.940334ms for default service account to be created ...
	I0213 23:10:24.934400   49036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:10:25.135904   49036 system_pods.go:86] 8 kube-system pods found
	I0213 23:10:25.135933   49036 system_pods.go:89] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:25.135940   49036 system_pods.go:89] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:25.135944   49036 system_pods.go:89] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:25.135949   49036 system_pods.go:89] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:25.135954   49036 system_pods.go:89] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:25.135959   49036 system_pods.go:89] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:25.135967   49036 system_pods.go:89] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:25.135973   49036 system_pods.go:89] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:25.135982   49036 system_pods.go:126] duration metric: took 201.576732ms to wait for k8s-apps to be running ...
	I0213 23:10:25.135992   49036 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:10:25.136035   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:10:25.151540   49036 system_svc.go:56] duration metric: took 15.53628ms WaitForService to wait for kubelet.
	I0213 23:10:25.151582   49036 kubeadm.go:581] duration metric: took 39.530502672s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:10:25.151608   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:10:25.333026   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:10:25.333067   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:10:25.333083   49036 node_conditions.go:105] duration metric: took 181.468311ms to run NodePressure ...
	I0213 23:10:25.333171   49036 start.go:228] waiting for startup goroutines ...
	I0213 23:10:25.333186   49036 start.go:233] waiting for cluster config update ...
	I0213 23:10:25.333200   49036 start.go:242] writing updated cluster config ...
	I0213 23:10:25.333540   49036 ssh_runner.go:195] Run: rm -f paused
	I0213 23:10:25.385974   49036 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0213 23:10:25.388225   49036 out.go:177] 
	W0213 23:10:25.389965   49036 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0213 23:10:25.391288   49036 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0213 23:10:25.392550   49036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-245122" cluster and "default" namespace by default
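
The version-skew warning above only concerns the host kubectl (1.29.1) talking to a v1.16.0 cluster; the log's own suggestion of using minikube's bundled kubectl sidesteps it. With the binary and profile names used in this report, that would look like:

	out/minikube-linux-amd64 -p old-k8s-version-245122 kubectl -- get pods -A
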
	I0213 23:10:24.281840   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.782341   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.467427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.965363   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:25.158811   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:27.158903   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.162245   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.283592   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.781156   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.465534   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.965570   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.163299   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.664184   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:34.281475   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.282050   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.966548   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.465588   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.159425   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.161056   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.781806   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.782565   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.465618   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.966613   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.659031   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.660105   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:43.282453   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.782436   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.967065   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.465277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.161783   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.659092   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:48.281903   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:50.782326   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.965978   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.972688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:52.464489   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.661150   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:51.661183   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.159746   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:53.280877   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:55.281432   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.465386   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.966020   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.659863   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.161127   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:57.781250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:00.283244   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.464959   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.466871   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.660636   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:04.162081   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:02.782971   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.282593   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:03.964986   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.967545   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:06.660761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.663916   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:07.783437   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.280975   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.281595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.466954   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.965354   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:11.159761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:13.160656   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:14.281819   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:16.781331   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.965830   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.464980   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.659894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.659996   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:18.782849   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.281343   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.965490   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.965841   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:22.465427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.660194   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.660348   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.158929   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:23.281731   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:25.282299   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.966008   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.463392   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:26.160687   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:28.160792   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.783770   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.282652   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:29.464941   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:31.965436   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.160850   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.661971   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.781595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.282110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:33.966260   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:36.465148   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.160093   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.160571   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.782870   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.281536   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:38.466898   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.965121   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:39.659930   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.160848   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.782134   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.287871   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.966494   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:45.465485   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.477988   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.659259   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:46.660566   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.165414   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.781501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.282150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.965827   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.465337   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:51.658915   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.160444   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.286142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.783072   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.465900   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.466029   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.659103   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.660419   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.784481   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.282749   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.965179   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.465662   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:00.661165   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.161035   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.787946   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:06.281932   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.964460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.966240   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.660384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.159544   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.781709   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.782556   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.465300   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.472665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.660651   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.159097   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.281500   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.781953   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:12.965510   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:14.966435   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.465559   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.160583   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.659605   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.784167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:20.280384   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:22.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.468825   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.965088   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.659644   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.662561   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.160923   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.781351   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:27.281938   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:23.966646   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.465094   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.160986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.161300   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:29.780690   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.282298   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.965450   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:31.467937   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.659169   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.659681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.782495   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.782679   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:33.965594   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.465409   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.660174   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.660802   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.160838   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.281205   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.281734   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:38.465702   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:40.965477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.659732   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:44.159873   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:43.780979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.781438   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:42.966342   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.464993   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.465742   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:46.162330   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:48.659964   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.782513   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:50.281255   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:52.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:49.967402   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.968499   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.161451   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:53.659594   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.782653   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.782779   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.465429   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.466199   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:55.659986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:57.661028   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:59.280842   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.281110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:58.965410   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:00.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.458755   49120 pod_ready.go:81] duration metric: took 4m0.00109163s waiting for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:01.458812   49120 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:01.458839   49120 pod_ready.go:38] duration metric: took 4m13.051566827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:01.458873   49120 kubeadm.go:640] restartCluster took 4m33.496925279s
	W0213 23:13:01.458967   49120 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:01.459008   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
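
The long run of pod_ready.go:102 lines above is minikube re-checking the Ready condition of the metrics-server pod until the 4m0s WaitExtra budget expires, after which it gives up on restarting the existing cluster and falls back to the kubeadm reset/init sequence that follows. One way to inspect why such a pod never becomes Ready (the pod name below is taken from this log; the matching kubectl context is not shown in this excerpt, so it is omitted) would be:

	kubectl -n kube-system describe pod metrics-server-57f55c9bc5-r44rm
	kubectl -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server
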
	I0213 23:13:00.160188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:02.663549   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:03.285939   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.782469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.165196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:07.661417   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:08.283394   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.286257   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.161461   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.652828   49443 pod_ready.go:81] duration metric: took 4m0.001101625s waiting for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:10.652857   49443 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:10.652877   49443 pod_ready.go:38] duration metric: took 4m11.564476633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:10.652905   49443 kubeadm.go:640] restartCluster took 4m34.344806193s
	W0213 23:13:10.652970   49443 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:10.652997   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:12.782042   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:15.282782   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:16.418651   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.959611919s)
	I0213 23:13:16.418750   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:16.435137   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:16.448436   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:16.459777   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:16.459826   49120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:16.708111   49120 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:17.782474   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:20.283238   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:22.782418   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:24.782894   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:26.784203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:28.667785   49120 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:13:28.667865   49120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:28.668000   49120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:28.668151   49120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:28.668282   49120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:28.668372   49120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:28.670147   49120 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:28.670266   49120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:28.670367   49120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:28.670480   49120 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:28.670559   49120 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:28.670674   49120 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:28.670763   49120 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:28.670864   49120 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:28.670964   49120 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:28.671068   49120 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:28.671163   49120 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:28.671221   49120 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:28.671296   49120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:28.671368   49120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:28.671440   49120 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0213 23:13:28.671506   49120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:28.671580   49120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:28.671658   49120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:28.671734   49120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:28.671791   49120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:28.673351   49120 out.go:204]   - Booting up control plane ...
	I0213 23:13:28.673448   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:28.673535   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:28.673627   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:28.673744   49120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:28.673846   49120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:28.673903   49120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:28.674084   49120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:28.674176   49120 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.010705 seconds
	I0213 23:13:28.674315   49120 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:28.674470   49120 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:28.674543   49120 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:28.674766   49120 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-778731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:28.674832   49120 kubeadm.go:322] [bootstrap-token] Using token: dwjaqi.e4fr4bxqfdq63m9e
	I0213 23:13:28.676266   49120 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:28.676392   49120 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:28.676495   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:28.676671   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:28.676871   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:28.677028   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:28.677142   49120 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:28.677283   49120 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:28.677337   49120 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:28.677392   49120 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:28.677405   49120 kubeadm.go:322] 
	I0213 23:13:28.677476   49120 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:28.677488   49120 kubeadm.go:322] 
	I0213 23:13:28.677586   49120 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:28.677599   49120 kubeadm.go:322] 
	I0213 23:13:28.677631   49120 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:28.677712   49120 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:28.677780   49120 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:28.677793   49120 kubeadm.go:322] 
	I0213 23:13:28.677864   49120 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:28.677881   49120 kubeadm.go:322] 
	I0213 23:13:28.677941   49120 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:28.677948   49120 kubeadm.go:322] 
	I0213 23:13:28.678019   49120 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:28.678125   49120 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:28.678215   49120 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:28.678223   49120 kubeadm.go:322] 
	I0213 23:13:28.678324   49120 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:28.678426   49120 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:28.678433   49120 kubeadm.go:322] 
	I0213 23:13:28.678544   49120 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.678685   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:28.678714   49120 kubeadm.go:322] 	--control-plane 
	I0213 23:13:28.678722   49120 kubeadm.go:322] 
	I0213 23:13:28.678834   49120 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:28.678841   49120 kubeadm.go:322] 
	I0213 23:13:28.678950   49120 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.679094   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:28.679106   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:13:28.679116   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:28.680826   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:25.241610   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.588591305s)
	I0213 23:13:25.241679   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:25.257221   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:25.271651   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:25.285556   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:25.285615   49443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:25.530438   49443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:29.281713   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:31.274625   49715 pod_ready.go:81] duration metric: took 4m0.00114055s waiting for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:31.274654   49715 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:31.274676   49715 pod_ready.go:38] duration metric: took 4m13.561333764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:31.274700   49715 kubeadm.go:640] restartCluster took 4m33.95094669s
	W0213 23:13:31.274766   49715 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:31.274807   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:28.682020   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:28.710027   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
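The 457-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. Purely as an illustration of what a bridge CNI conflist of this kind typically contains (the field values below are assumptions, not the file minikube actually wrote):

    # Illustrative bridge + portmap conflist; the real file contents are not shown in this log.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF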
	I0213 23:13:28.752989   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:28.753118   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:28.753117   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=no-preload-778731 minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.147657   49120 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:29.147806   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.647920   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.648105   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.148819   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.648877   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.647939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.005257   49443 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:37.005340   49443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:37.005464   49443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:37.005611   49443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:37.005750   49443 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:37.005836   49443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:37.007501   49443 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:37.007606   49443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:37.007687   49443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:37.007782   49443 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:37.007869   49443 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:37.007960   49443 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:37.008047   49443 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:37.008139   49443 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:37.008221   49443 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:37.008324   49443 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:37.008437   49443 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:37.008488   49443 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:37.008577   49443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:37.008657   49443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:37.008742   49443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:37.008837   49443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:37.008916   49443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:37.009044   49443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:37.009150   49443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:37.010808   49443 out.go:204]   - Booting up control plane ...
	I0213 23:13:37.010943   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:37.011053   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:37.011155   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:37.011537   49443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:37.011661   49443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:37.011720   49443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:37.011915   49443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:37.012024   49443 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005842 seconds
	I0213 23:13:37.012154   49443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:37.012297   49443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:37.012376   49443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:37.012595   49443 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-340656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:37.012668   49443 kubeadm.go:322] [bootstrap-token] Using token: 0y2cx5.j4vucgv3wtut6xkw
	I0213 23:13:37.014296   49443 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:37.014433   49443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:37.014535   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:37.014697   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:37.014837   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:37.014966   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:37.015073   49443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:37.015203   49443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:37.015256   49443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:37.015316   49443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:37.015326   49443 kubeadm.go:322] 
	I0213 23:13:37.015393   49443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:37.015403   49443 kubeadm.go:322] 
	I0213 23:13:37.015500   49443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:37.015511   49443 kubeadm.go:322] 
	I0213 23:13:37.015535   49443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:37.015603   49443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:37.015668   49443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:37.015677   49443 kubeadm.go:322] 
	I0213 23:13:37.015744   49443 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:37.015754   49443 kubeadm.go:322] 
	I0213 23:13:37.015814   49443 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:37.015824   49443 kubeadm.go:322] 
	I0213 23:13:37.015889   49443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:37.015981   49443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:37.016075   49443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:37.016087   49443 kubeadm.go:322] 
	I0213 23:13:37.016182   49443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:37.016272   49443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:37.016282   49443 kubeadm.go:322] 
	I0213 23:13:37.016371   49443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016486   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:37.016522   49443 kubeadm.go:322] 	--control-plane 
	I0213 23:13:37.016527   49443 kubeadm.go:322] 
	I0213 23:13:37.016637   49443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:37.016643   49443 kubeadm.go:322] 
	I0213 23:13:37.016739   49443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016875   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:37.016887   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:13:37.016895   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:37.018483   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:33.148023   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:33.648861   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.147939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.648160   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.148620   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.648710   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.148263   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.648202   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.148597   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.648067   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.019795   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:37.080689   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:37.145132   49443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:37.145273   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.145374   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=embed-certs-340656 minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.195322   49443 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:37.575387   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.075523   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.575550   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.075996   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.148294   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.648747   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.148671   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.648021   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.148566   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.648799   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.148354   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.257502   49120 kubeadm.go:1088] duration metric: took 12.504501087s to wait for elevateKubeSystemPrivileges.
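The run of "kubectl get sa default" calls above is minikube polling, roughly every 500ms, until the default ServiceAccount exists so that elevateKubeSystemPrivileges can finish; the ClusterRoleBinding itself was created by the command issued at 23:13:28.753. A shortened shell equivalent of the two steps shown in this log (the versioned kubectl path is abbreviated to kubectl):

    # Grant cluster-admin to kube-system:default (command from the log at 23:13:28.753)
    sudo kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Poll until the default ServiceAccount is present (the repeated calls above)
    until sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done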
	I0213 23:13:41.257549   49120 kubeadm.go:406] StartCluster complete in 5m13.347836612s
	I0213 23:13:41.257573   49120 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.257681   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:41.260299   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.260647   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:41.260677   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:41.260755   49120 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778731"
	I0213 23:13:41.260779   49120 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778731"
	W0213 23:13:41.260787   49120 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:41.260777   49120 addons.go:69] Setting metrics-server=true in profile "no-preload-778731"
	I0213 23:13:41.260807   49120 addons.go:234] Setting addon metrics-server=true in "no-preload-778731"
	W0213 23:13:41.260815   49120 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:41.260840   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260858   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260882   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:13:41.261207   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261227   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261267   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261291   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261426   49120 addons.go:69] Setting default-storageclass=true in profile "no-preload-778731"
	I0213 23:13:41.261447   49120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778731"
	I0213 23:13:41.261807   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261899   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.278449   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0213 23:13:41.278646   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0213 23:13:41.278874   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.278992   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.279367   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279389   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279460   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279485   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279748   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.279929   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.280301   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280345   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280389   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280403   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0213 23:13:41.280420   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280729   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.281302   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.281324   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.281723   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.281932   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.286017   49120 addons.go:234] Setting addon default-storageclass=true in "no-preload-778731"
	W0213 23:13:41.286039   49120 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:41.286067   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.286476   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.286511   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.299018   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0213 23:13:41.299266   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0213 23:13:41.299626   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.299951   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.300111   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300127   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300624   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300656   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300707   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.300885   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.301280   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.301628   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.303270   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.304846   49120 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:41.303809   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.306034   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:41.306048   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:41.306068   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.307731   49120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:41.309028   49120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.309045   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:41.309065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.309214   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0213 23:13:41.309635   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.309722   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310208   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.310227   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.310342   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.310379   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310514   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.310731   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.310877   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.310900   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.311093   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.311466   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.311516   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.312194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312559   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.312580   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312814   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.313006   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.313140   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.313283   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.327021   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0213 23:13:41.327605   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.328038   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.328055   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.328399   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.328596   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.330082   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.330333   49120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.330344   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:41.330356   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.333321   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333703   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.333731   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.334075   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.334494   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.334643   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.502879   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
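The one-liner above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to 192.168.83.1 (confirmed at 23:13:42, "host record injected"). The same pipeline, broken out for readability with the versioned kubectl path shortened:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -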
	I0213 23:13:41.534876   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:41.534908   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:41.587429   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.589619   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.616755   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:41.616783   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:41.688015   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.688039   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:41.777647   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
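After the four metrics-server manifests are applied, the addon is only marked verified later in the log (23:13:43, "Verifying addon metrics-server=true"). Two checks one could run by hand at this point; these are generic kubectl commands, not commands from the log, and the deployment/APIService names are inferred from the manifest file names and the pod name seen elsewhere in this report:

    # Hand-run verification (assumed names: deployment "metrics-server", APIService "v1beta1.metrics.k8s.io")
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl get apiservice v1beta1.metrics.k8s.io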
	I0213 23:13:41.844418   49120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-778731" context rescaled to 1 replicas
	I0213 23:13:41.844460   49120 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:41.847252   49120 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:41.848614   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:42.311509   49120 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:42.915046   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327574246s)
	I0213 23:13:42.915112   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915127   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915219   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325575731s)
	I0213 23:13:42.915241   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915250   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915430   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.915467   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.915475   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.915485   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915493   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917607   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917640   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917673   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917652   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917719   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917730   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917764   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.917773   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917996   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.918014   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.963310   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.963336   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.963632   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.963652   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999467   49120 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.150816624s)
	I0213 23:13:42.999513   49120 node_ready.go:35] waiting up to 6m0s for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:42.999542   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221849263s)
	I0213 23:13:42.999604   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999620   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.999914   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.999932   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999944   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999953   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:43.000322   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:43.000341   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:43.000355   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:43.000372   49120 addons.go:470] Verifying addon metrics-server=true in "no-preload-778731"
	I0213 23:13:43.003022   49120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
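With the addons enabled, what remains in this run is readiness polling: a 6m0s node-level wait (node_ready.go at 23:13:42) followed by per-pod waits over the system-critical labels. A hedged kubectl equivalent of those waits, for reference only; the log itself performs them through minikube's own pollers:

    # Rough equivalents of the waits performed in this run (illustrative, not from the log)
    kubectl wait --for=condition=Ready node/no-preload-778731 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m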
	I0213 23:13:39.575883   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.076191   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.575969   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.075959   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.576297   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.075511   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.575528   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.076112   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.575825   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:44.076340   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.156104   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.881268834s)
	I0213 23:13:46.156183   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:46.173816   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:46.185578   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:46.196865   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:46.196911   49715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:46.251785   49715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:46.251863   49715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:46.416331   49715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:46.416503   49715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:46.416643   49715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:46.690351   49715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:46.692352   49715 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:46.692470   49715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:46.692583   49715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:46.692710   49715 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:46.692812   49715 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:46.692929   49715 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:46.693027   49715 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:46.693116   49715 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:46.693220   49715 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:46.693322   49715 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:46.693423   49715 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:46.693480   49715 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:46.693559   49715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:46.919270   49715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:47.096236   49715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:47.207058   49715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:47.262083   49715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:47.262614   49715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:47.265288   49715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:47.267143   49715 out.go:204]   - Booting up control plane ...
	I0213 23:13:47.267277   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:47.267383   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:47.267570   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:47.284718   49715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:47.286027   49715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:47.286152   49715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:47.443974   49715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:43.004170   49120 addons.go:505] enable addons completed in 1.743494195s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:43.030538   49120 node_ready.go:49] node "no-preload-778731" has status "Ready":"True"
	I0213 23:13:43.030566   49120 node_ready.go:38] duration metric: took 31.039482ms waiting for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:43.030581   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:43.041854   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:43.085259   49120 pod_ready.go:97] pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085310   49120 pod_ready.go:81] duration metric: took 43.414984ms waiting for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:43.085328   49120 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085337   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094656   49120 pod_ready.go:92] pod "coredns-76f75df574-f4g5w" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.094686   49120 pod_ready.go:81] duration metric: took 2.009341273s waiting for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094696   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101331   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.101352   49120 pod_ready.go:81] duration metric: took 6.650644ms waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101362   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108662   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.108686   49120 pod_ready.go:81] duration metric: took 7.317621ms waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108695   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115600   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.115620   49120 pod_ready.go:81] duration metric: took 6.918739ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115629   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403942   49120 pod_ready.go:92] pod "kube-proxy-7vcqq" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.403977   49120 pod_ready.go:81] duration metric: took 288.33703ms waiting for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403990   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804609   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.804646   49120 pod_ready.go:81] duration metric: took 400.646621ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804661   49120 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:44.575423   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.076435   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.575498   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.076393   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.575716   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.075439   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.575623   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.076149   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.575619   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.757507   49443 kubeadm.go:1088] duration metric: took 11.612278698s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:48.757567   49443 kubeadm.go:406] StartCluster complete in 5m12.504615736s
	I0213 23:13:48.757592   49443 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.757689   49443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:48.760402   49443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.760794   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:48.761145   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:13:48.761320   49443 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:48.761392   49443 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-340656"
	I0213 23:13:48.761411   49443 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-340656"
	W0213 23:13:48.761420   49443 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:48.761470   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762064   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762094   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762173   49443 addons.go:69] Setting default-storageclass=true in profile "embed-certs-340656"
	I0213 23:13:48.762208   49443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-340656"
	I0213 23:13:48.762334   49443 addons.go:69] Setting metrics-server=true in profile "embed-certs-340656"
	I0213 23:13:48.762359   49443 addons.go:234] Setting addon metrics-server=true in "embed-certs-340656"
	W0213 23:13:48.762368   49443 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:48.762418   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762605   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762642   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762770   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762812   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.782845   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0213 23:13:48.782988   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0213 23:13:48.782993   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0213 23:13:48.783453   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783578   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783583   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.784018   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784038   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784160   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784177   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784197   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784211   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784431   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784636   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.784704   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784781   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.785241   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785264   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.785910   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785952   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.795703   49443 addons.go:234] Setting addon default-storageclass=true in "embed-certs-340656"
	W0213 23:13:48.795803   49443 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:48.795847   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.796295   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.796352   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.805562   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0213 23:13:48.806234   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.815444   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0213 23:13:48.815451   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.815558   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.817565   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.817770   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.818164   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.818796   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.818815   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.819308   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0213 23:13:48.819537   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.819661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.819723   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.821798   49443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:48.820119   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.821685   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.823106   49443 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:48.823122   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:48.823142   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.824803   49443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:48.826431   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.826467   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:48.826487   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:48.826507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.826393   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.826536   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.827054   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.827129   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.827155   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.827617   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.828067   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.828089   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.828119   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.828335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.828539   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.830417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.831572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.831604   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.832609   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.832827   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.832999   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.833165   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.851188   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0213 23:13:48.851868   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.852446   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.852482   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.852913   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.853134   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.855360   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.855766   49443 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:48.855792   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:48.855810   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.859610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.859877   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.859915   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.860263   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.860507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.860699   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.860854   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:49.015561   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:49.019336   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:49.047556   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:49.047593   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:49.083994   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:49.109749   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:49.109778   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:49.196430   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.196459   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:49.297603   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.306053   49443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-340656" context rescaled to 1 replicas
	I0213 23:13:49.306112   49443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:49.307559   49443 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:49.308883   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:51.125630   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109969214s)
	I0213 23:13:51.125663   49443 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:51.492579   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473198087s)
	I0213 23:13:51.492655   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492672   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492587   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.408541587s)
	I0213 23:13:51.492794   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492820   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493027   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493041   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493052   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493061   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493362   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493392   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493401   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493458   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493492   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493501   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493511   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493520   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493768   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493791   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.550911   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.550944   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.551267   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.551319   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.728993   49443 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420033663s)
	I0213 23:13:51.729078   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.431431547s)
	I0213 23:13:51.729114   49443 node_ready.go:35] waiting up to 6m0s for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.729135   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729163   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729446   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729462   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729473   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729483   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729770   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.729803   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729813   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729823   49443 addons.go:470] Verifying addon metrics-server=true in "embed-certs-340656"
	I0213 23:13:51.732785   49443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:47.812862   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:49.820823   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:52.318873   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:51.733634   49443 addons.go:505] enable addons completed in 2.972313278s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:51.741252   49443 node_ready.go:49] node "embed-certs-340656" has status "Ready":"True"
	I0213 23:13:51.741279   49443 node_ready.go:38] duration metric: took 12.133263ms waiting for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.741290   49443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:51.749409   49443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766298   49443 pod_ready.go:92] pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.766331   49443 pod_ready.go:81] duration metric: took 1.01688514s waiting for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766345   49443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777697   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.777725   49443 pod_ready.go:81] duration metric: took 11.371663ms waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777738   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789006   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.789030   49443 pod_ready.go:81] duration metric: took 11.286651ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789040   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798798   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.798820   49443 pod_ready.go:81] duration metric: took 9.773358ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798829   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807522   49443 pod_ready.go:92] pod "kube-proxy-4vgt5" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:53.807555   49443 pod_ready.go:81] duration metric: took 1.00871819s waiting for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807569   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133771   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:54.133808   49443 pod_ready.go:81] duration metric: took 326.228368ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133819   49443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:55.947176   49715 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502842 seconds
	I0213 23:13:55.947340   49715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:55.968064   49715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:56.503592   49715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:56.503798   49715 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-083863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:57.020246   49715 kubeadm.go:322] [bootstrap-token] Using token: 1sfxye.gyrkuj525fbtgg0g
	I0213 23:13:57.021591   49715 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:57.021724   49715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:57.028718   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:57.038574   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:57.046578   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:57.051622   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:57.065769   49715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:57.091404   49715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:57.330768   49715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:57.436406   49715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:57.436445   49715 kubeadm.go:322] 
	I0213 23:13:57.436542   49715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:57.436556   49715 kubeadm.go:322] 
	I0213 23:13:57.436650   49715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:57.436681   49715 kubeadm.go:322] 
	I0213 23:13:57.436729   49715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:57.436813   49715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:57.436887   49715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:57.436898   49715 kubeadm.go:322] 
	I0213 23:13:57.436989   49715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:57.437002   49715 kubeadm.go:322] 
	I0213 23:13:57.437067   49715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:57.437078   49715 kubeadm.go:322] 
	I0213 23:13:57.437137   49715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:57.437227   49715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:57.437344   49715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:57.437365   49715 kubeadm.go:322] 
	I0213 23:13:57.437463   49715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:57.437561   49715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:57.437577   49715 kubeadm.go:322] 
	I0213 23:13:57.437713   49715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.437878   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:57.437915   49715 kubeadm.go:322] 	--control-plane 
	I0213 23:13:57.437925   49715 kubeadm.go:322] 
	I0213 23:13:57.438021   49715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:57.438032   49715 kubeadm.go:322] 
	I0213 23:13:57.438140   49715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.438284   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:57.438602   49715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:57.438886   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:13:57.438904   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:57.440968   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:57.442459   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:57.466652   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:57.538217   49715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:57.538279   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:57.538289   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=default-k8s-diff-port-083863 minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:54.320129   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.812983   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.141892   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:58.143201   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:57.914767   49715 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:57.914957   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.415274   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.915866   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.415351   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.915329   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.415646   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.915129   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.415803   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.915716   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:02.415378   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.815013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:01.312236   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:00.645227   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:03.145517   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:02.915447   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.415367   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.915183   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.416047   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.915850   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.415867   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.915570   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.415580   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.915010   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:07.415431   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.314560   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.817591   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.642499   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.644055   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.916067   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.415001   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.915359   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.415672   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.915997   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:10.105267   49715 kubeadm.go:1088] duration metric: took 12.567044904s to wait for elevateKubeSystemPrivileges.
	I0213 23:14:10.105293   49715 kubeadm.go:406] StartCluster complete in 5m12.839656692s
	I0213 23:14:10.105310   49715 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.105392   49715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:14:10.107335   49715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.107629   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:14:10.107747   49715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:14:10.107821   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:14:10.107841   49715 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107858   49715 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107866   49715 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-083863"
	I0213 23:14:10.107873   49715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-083863"
	W0213 23:14:10.107878   49715 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:14:10.107885   49715 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107905   49715 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.107917   49715 addons.go:243] addon metrics-server should already be in state true
	I0213 23:14:10.107941   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.107961   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.108282   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108352   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108368   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108382   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108392   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108355   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.124618   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0213 23:14:10.124636   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0213 23:14:10.125154   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125261   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125984   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.125990   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.126014   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126029   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126422   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126501   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126604   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.127038   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.127067   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131142   49715 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.131168   49715 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:14:10.131196   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.131628   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.131661   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131866   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0213 23:14:10.132342   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.133024   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.133044   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.133539   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.134069   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.134119   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.145244   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0213 23:14:10.145674   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.146213   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.146233   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.146642   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.146845   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.148779   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.151227   49715 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:14:10.152983   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:14:10.153004   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:14:10.150602   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0213 23:14:10.153029   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.154229   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.154857   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.154876   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.155560   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.156429   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.156476   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.156757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.157450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157680   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.157898   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.158068   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.158211   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.159437   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0213 23:14:10.159780   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.160316   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.160328   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.160712   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.160874   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.163133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.166002   49715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:14:10.168221   49715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.168239   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:14:10.168259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.172119   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172539   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.172562   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172800   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.173447   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.173609   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.173769   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.175322   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0213 23:14:10.175719   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.176212   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.176223   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.176556   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.176727   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.178938   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.179149   49715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.179163   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:14:10.179174   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.182253   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.182739   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.182773   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.183106   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.183259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.183425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.183534   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.327834   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:14:10.327857   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:14:10.362507   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.405623   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:14:10.405655   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:14:10.413284   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.427964   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:14:10.459317   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.459343   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:14:10.552860   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.687588   49715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-083863" context rescaled to 1 replicas
	I0213 23:14:10.687640   49715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:14:10.689888   49715 out.go:177] * Verifying Kubernetes components...
	I0213 23:14:10.691656   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:14:08.312251   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:10.313161   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.313239   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.671905   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.309368382s)
	I0213 23:14:12.671963   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258642736s)
	I0213 23:14:12.671974   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.671999   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672008   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244007691s)
	I0213 23:14:12.672048   49715 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 23:14:12.672013   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672319   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672358   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672414   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672428   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672440   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672391   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672502   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672511   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672522   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672672   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672713   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672825   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672842   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672845   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.718598   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.718635   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.718899   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.718948   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.718957   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992151   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.439242656s)
	I0213 23:14:12.992169   49715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.300483548s)
	I0213 23:14:12.992204   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992208   49715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:12.992219   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.992608   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.992650   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.992674   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992694   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992706   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.993012   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.993033   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.993082   49715 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-083863"
	I0213 23:14:12.994959   49715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:14:10.144369   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.642284   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.996304   49715 addons.go:505] enable addons completed in 2.888556474s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:14:13.017331   49715 node_ready.go:49] node "default-k8s-diff-port-083863" has status "Ready":"True"
	I0213 23:14:13.017356   49715 node_ready.go:38] duration metric: took 25.135832ms waiting for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:13.017369   49715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:14:13.040090   49715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047064   49715 pod_ready.go:92] pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.047105   49715 pod_ready.go:81] duration metric: took 2.006967952s waiting for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047119   49715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052773   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.052793   49715 pod_ready.go:81] duration metric: took 5.668033ms waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052801   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.057989   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.058012   49715 pod_ready.go:81] duration metric: took 5.204253ms waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.058024   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063408   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.063426   49715 pod_ready.go:81] duration metric: took 5.394681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063434   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068502   49715 pod_ready.go:92] pod "kube-proxy-kvz2b" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.068523   49715 pod_ready.go:81] duration metric: took 5.082168ms waiting for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068534   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445109   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.445132   49715 pod_ready.go:81] duration metric: took 376.590631ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445142   49715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:17.453588   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:14.816746   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.313290   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:15.141901   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.641098   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.453805   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.954116   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.812763   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.814338   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.641389   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.641735   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.142168   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.455003   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.952168   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.312468   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.813420   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.641722   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.141082   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:28.954054   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:30.954647   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.311343   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.312249   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.143011   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.642102   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.452218   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.453522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.457001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.314313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.812309   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:36.143532   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:38.640894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:39.955206   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.456339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.813776   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.314111   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.642572   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:43.141919   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:44.955150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.454324   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.813470   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.313382   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.143485   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.641760   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.954167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.453822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.814576   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:50.312600   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.313062   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.642698   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.141500   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.141646   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.454979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.953279   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.812403   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.813413   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.142104   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:58.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.453692   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.952522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.313705   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.813002   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:00.642441   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:02.644754   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.954032   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.453202   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.813780   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.312152   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:04.645545   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:07.142188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.454411   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:10.953929   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.813133   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.315282   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:09.641331   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.644066   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:14.141197   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.452937   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:15.453227   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:17.455142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.814488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.312013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.142256   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:19.956449   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.454447   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.313100   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.315124   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.642516   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:23.141725   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.955277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:26.956469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.813277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.813332   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.313503   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:25.148206   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.642527   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.453659   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:31.953193   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.812921   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.311859   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.642812   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.141177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.141385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.452179   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.454250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.312263   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.812360   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.642681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.142639   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:38.952639   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:40.953841   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.311603   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.312975   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.640004   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.641689   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:42.954046   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.453175   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.812207   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:46.313761   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.642354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.141466   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:47.953013   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.455958   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.813689   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:51.312695   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.144359   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.145852   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.952203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.960421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.455215   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:53.312858   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:55.313197   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.313493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.642775   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.142159   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.143780   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.953718   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.954907   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.813086   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:02.313743   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.640609   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:03.641712   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.453269   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:06.454001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.813366   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.313460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:05.642520   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.644309   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:08.454568   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.953538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:09.315454   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:11.814145   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.142385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.644175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.953619   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.452015   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.455884   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:14.311599   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:16.312822   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.143506   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.643647   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:19.952742   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:21.953464   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:18.314298   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.812863   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.142175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:22.641953   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.953599   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.953715   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.312368   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.813170   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:24.642939   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:27.143008   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.452587   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.454360   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.314038   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.812058   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:29.642029   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.141959   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.142628   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.955547   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:35.453428   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.456558   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.813040   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.813607   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.314673   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:36.143091   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:38.147685   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.953073   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:42.452724   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.811843   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:41.811877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:40.645177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.140828   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:44.453277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.453393   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.813703   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.312231   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:45.141859   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:47.142843   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.453508   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.456357   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.312293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.812918   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:49.641676   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.142518   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.951784   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.954108   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.455497   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:53.312477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:55.313195   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.642918   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.141241   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.141855   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.954832   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.455675   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.811554   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.813709   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.313752   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:01.142778   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:03.143196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.953816   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.953967   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.812917   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.814681   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:05.644404   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:07.644824   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.455392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.953935   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.312828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.811876   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:10.141985   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:12.642984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.453572   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.454161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.314828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.813786   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:15.143013   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:17.143864   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.144089   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:18.952608   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:20.952810   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.312837   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.316700   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.641354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:24.142975   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:22.953607   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.453091   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.454501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:23.811674   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.814225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:26.640796   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:28.642684   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:29.952519   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.453137   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.816563   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.314052   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.642932   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:33.142380   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.456778   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.459583   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.812724   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.812895   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.813814   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:35.641888   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.144690   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.952822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.956268   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.821433   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:41.313306   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.641240   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:42.641667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.453378   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.953398   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.313457   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812519   49120 pod_ready.go:81] duration metric: took 4m0.007851911s waiting for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:45.812528   49120 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:45.812534   49120 pod_ready.go:38] duration metric: took 4m2.781943239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:45.812548   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:45.812574   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:45.812640   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:45.881239   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:45.881267   49120 cri.go:89] found id: ""
	I0213 23:17:45.881277   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:45.881327   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.886446   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:45.886531   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:45.926920   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:45.926947   49120 cri.go:89] found id: ""
	I0213 23:17:45.926955   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:45.927000   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.931500   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:45.931577   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:45.979081   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:45.979109   49120 cri.go:89] found id: ""
	I0213 23:17:45.979119   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:45.979174   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.984481   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:45.984539   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:46.035365   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.035385   49120 cri.go:89] found id: ""
	I0213 23:17:46.035392   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:46.035438   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.039634   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:46.039695   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:46.087404   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:46.087429   49120 cri.go:89] found id: ""
	I0213 23:17:46.087436   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:46.087490   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.091828   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:46.091889   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:46.133625   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:46.133651   49120 cri.go:89] found id: ""
	I0213 23:17:46.133658   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:46.133710   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.138378   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:46.138456   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:46.181018   49120 cri.go:89] found id: ""
	I0213 23:17:46.181048   49120 logs.go:276] 0 containers: []
	W0213 23:17:46.181058   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:46.181065   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:46.181141   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:46.221347   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.221374   49120 cri.go:89] found id: ""
	I0213 23:17:46.221385   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:46.221448   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.226298   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:46.226331   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:46.268881   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:46.268915   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.325183   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:46.325225   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.372600   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:46.372637   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:46.791381   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:46.791438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:46.861239   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:46.861431   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:46.884969   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:46.885009   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:46.909324   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:46.909352   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:46.966664   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:46.966698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:47.030276   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:47.030321   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:47.081480   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:47.081516   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:47.238201   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:47.238238   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:47.285995   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:47.286033   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:47.332459   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332486   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:47.332566   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:47.332580   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:47.332596   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:47.332616   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332622   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:44.643384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.141032   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.953650   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:50.453421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.453501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:49.641373   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.142827   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:54.141398   49443 pod_ready.go:81] duration metric: took 4m0.007567399s waiting for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:54.141420   49443 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:54.141428   49443 pod_ready.go:38] duration metric: took 4m2.400127673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:54.141441   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:54.141464   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:54.141506   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:54.203295   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:54.203319   49443 cri.go:89] found id: ""
	I0213 23:17:54.203329   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:54.203387   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.208671   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:54.208748   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:54.254150   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:54.254183   49443 cri.go:89] found id: ""
	I0213 23:17:54.254193   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:54.254259   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.259090   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:54.259178   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:54.309365   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:54.309385   49443 cri.go:89] found id: ""
	I0213 23:17:54.309392   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:54.309436   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.315937   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:54.316014   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:54.363796   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.363855   49443 cri.go:89] found id: ""
	I0213 23:17:54.363866   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:54.363926   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.368767   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:54.368842   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:54.417590   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:54.417620   49443 cri.go:89] found id: ""
	I0213 23:17:54.417637   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:54.417696   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.422980   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:54.423053   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:54.468990   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.469019   49443 cri.go:89] found id: ""
	I0213 23:17:54.469029   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:54.469094   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.473989   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:54.474073   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:54.524124   49443 cri.go:89] found id: ""
	I0213 23:17:54.524154   49443 logs.go:276] 0 containers: []
	W0213 23:17:54.524164   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:54.524172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:54.524239   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.953845   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.459517   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.333824   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:57.351216   49120 api_server.go:72] duration metric: took 4m15.50672707s to wait for apiserver process to appear ...
	I0213 23:17:57.351245   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:57.351281   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:57.351340   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:57.405928   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:57.405956   49120 cri.go:89] found id: ""
	I0213 23:17:57.405963   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:57.406007   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.410541   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:57.410619   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:57.456843   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:57.456871   49120 cri.go:89] found id: ""
	I0213 23:17:57.456881   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:57.456940   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.461801   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:57.461852   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:57.504653   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.504690   49120 cri.go:89] found id: ""
	I0213 23:17:57.504702   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:57.504762   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.509177   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:57.509250   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:57.556672   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:57.556696   49120 cri.go:89] found id: ""
	I0213 23:17:57.556704   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:57.556747   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.561343   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:57.561399   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:57.606959   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:57.606994   49120 cri.go:89] found id: ""
	I0213 23:17:57.607005   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:57.607068   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.611356   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:57.611440   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:57.655205   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:57.655230   49120 cri.go:89] found id: ""
	I0213 23:17:57.655238   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:57.655284   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.659762   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:57.659850   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:57.699989   49120 cri.go:89] found id: ""
	I0213 23:17:57.700012   49120 logs.go:276] 0 containers: []
	W0213 23:17:57.700019   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:57.700028   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:57.700075   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.562654   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.562674   49443 cri.go:89] found id: ""
	I0213 23:17:54.562682   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:54.562745   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.567182   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:54.567209   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:54.666809   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:54.666847   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:54.818292   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:54.818324   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.878074   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:54.878108   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.938472   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:54.938509   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.985201   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:54.985235   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:54.999987   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:55.000016   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:55.058536   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:55.058573   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:55.108130   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:55.108172   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:55.154299   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:55.154327   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:55.205554   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:55.205583   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:55.615944   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:55.615987   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.179069   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:58.194968   49443 api_server.go:72] duration metric: took 4m8.888826635s to wait for apiserver process to appear ...
	I0213 23:17:58.194992   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:58.195020   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:58.195067   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:58.245997   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.246029   49443 cri.go:89] found id: ""
	I0213 23:17:58.246038   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:58.246103   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.251486   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:58.251566   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:58.299878   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:58.299909   49443 cri.go:89] found id: ""
	I0213 23:17:58.299919   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:58.299977   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.305075   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:58.305139   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:58.352587   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:58.352617   49443 cri.go:89] found id: ""
	I0213 23:17:58.352628   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:58.352688   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.357493   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:58.357576   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:58.412181   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.412203   49443 cri.go:89] found id: ""
	I0213 23:17:58.412211   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:58.412265   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.418852   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:58.418931   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:58.470881   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.470907   49443 cri.go:89] found id: ""
	I0213 23:17:58.470916   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:58.470970   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.476768   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:58.476851   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:58.548272   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:58.548293   49443 cri.go:89] found id: ""
	I0213 23:17:58.548301   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:58.548357   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.553380   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:58.553452   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:58.599623   49443 cri.go:89] found id: ""
	I0213 23:17:58.599652   49443 logs.go:276] 0 containers: []
	W0213 23:17:58.599663   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:58.599669   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:58.599725   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:58.647872   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.647896   49443 cri.go:89] found id: ""
	I0213 23:17:58.647906   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:58.647966   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.653015   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:58.653041   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.707958   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:58.708000   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.759975   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:58.760015   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.814801   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:58.814833   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.853782   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.853814   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:59.217806   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:59.217854   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:59.278255   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:59.278294   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:59.385496   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:59.385537   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:59.953729   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:02.454016   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.740739   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:57.740774   49120 cri.go:89] found id: ""
	I0213 23:17:57.740785   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:57.740839   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.745140   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:57.745163   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:57.758556   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:57.758604   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:57.900468   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:57.900507   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.945665   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:57.945693   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:58.003484   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:58.003521   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:58.048797   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:58.048826   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.096309   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:58.096347   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:58.173795   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.173990   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.196277   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:58.196306   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:58.266087   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:58.266129   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:58.325638   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:58.325676   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:58.372711   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:58.372752   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:58.444057   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.444097   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:58.830470   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830511   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:58.830572   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:58.830591   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.830600   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.830610   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830618   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:59.544056   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:59.544517   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:59.607033   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:59.607067   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:59.654534   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:59.654584   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:59.719274   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:59.719309   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:02.234489   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:18:02.240412   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:18:02.241675   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:02.241699   49443 api_server.go:131] duration metric: took 4.046700263s to wait for apiserver health ...
	I0213 23:18:02.241710   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:02.241735   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:02.241796   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:02.289133   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:02.289158   49443 cri.go:89] found id: ""
	I0213 23:18:02.289166   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:18:02.289212   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.295450   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:02.295527   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:02.342262   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:02.342285   49443 cri.go:89] found id: ""
	I0213 23:18:02.342292   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:18:02.342337   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.346810   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:02.346874   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:02.385638   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:02.385665   49443 cri.go:89] found id: ""
	I0213 23:18:02.385673   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:18:02.385725   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.389834   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:02.389920   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:02.435078   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:02.435110   49443 cri.go:89] found id: ""
	I0213 23:18:02.435121   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:18:02.435184   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.440237   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:02.440297   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:02.483869   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.483891   49443 cri.go:89] found id: ""
	I0213 23:18:02.483899   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:18:02.483942   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.490454   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:02.490532   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:02.540971   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:02.541000   49443 cri.go:89] found id: ""
	I0213 23:18:02.541010   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:18:02.541069   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.545818   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:02.545906   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:02.593132   49443 cri.go:89] found id: ""
	I0213 23:18:02.593159   49443 logs.go:276] 0 containers: []
	W0213 23:18:02.593166   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:02.593172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:02.593222   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:02.634979   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.635015   49443 cri.go:89] found id: ""
	I0213 23:18:02.635028   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:18:02.635089   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.640246   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:18:02.640274   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.681426   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:18:02.681458   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.721033   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:02.721062   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:03.049340   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:03.049385   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:18:03.154378   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:18:03.154417   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:03.215045   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:18:03.215081   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:03.260291   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:18:03.260320   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:03.323526   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:18:03.323565   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:03.378686   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:03.378731   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:03.406717   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:03.406742   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:03.547999   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:18:03.548035   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:03.593226   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:18:03.593255   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:06.160914   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:06.160954   49443 system_pods.go:61] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.160963   49443 system_pods.go:61] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.160970   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.160977   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.160996   49443 system_pods.go:61] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.161008   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.161018   49443 system_pods.go:61] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.161025   49443 system_pods.go:61] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.161035   49443 system_pods.go:74] duration metric: took 3.919318115s to wait for pod list to return data ...
	I0213 23:18:06.161046   49443 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:06.165231   49443 default_sa.go:45] found service account: "default"
	I0213 23:18:06.165262   49443 default_sa.go:55] duration metric: took 4.207834ms for default service account to be created ...
	I0213 23:18:06.165271   49443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:06.172453   49443 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:06.172488   49443 system_pods.go:89] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.172494   49443 system_pods.go:89] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.172499   49443 system_pods.go:89] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.172503   49443 system_pods.go:89] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.172507   49443 system_pods.go:89] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.172512   49443 system_pods.go:89] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.172517   49443 system_pods.go:89] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.172522   49443 system_pods.go:89] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.172531   49443 system_pods.go:126] duration metric: took 7.254871ms to wait for k8s-apps to be running ...
	I0213 23:18:06.172541   49443 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:06.172598   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:06.193026   49443 system_svc.go:56] duration metric: took 20.479072ms WaitForService to wait for kubelet.
	I0213 23:18:06.193051   49443 kubeadm.go:581] duration metric: took 4m16.886913912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:06.193072   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:06.196910   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:06.196940   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:06.196951   49443 node_conditions.go:105] duration metric: took 3.874223ms to run NodePressure ...
	I0213 23:18:06.196962   49443 start.go:228] waiting for startup goroutines ...
	I0213 23:18:06.196968   49443 start.go:233] waiting for cluster config update ...
	I0213 23:18:06.196977   49443 start.go:242] writing updated cluster config ...
	I0213 23:18:06.197233   49443 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:06.248295   49443 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:06.250392   49443 out.go:177] * Done! kubectl is now configured to use "embed-certs-340656" cluster and "default" namespace by default
	I0213 23:18:04.455358   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:06.953191   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.954115   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:10.954853   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.832437   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:18:08.838687   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:18:08.839999   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:18:08.840021   49120 api_server.go:131] duration metric: took 11.488768389s to wait for apiserver health ...
	I0213 23:18:08.840031   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:08.840058   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:08.840122   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:08.891532   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:08.891559   49120 cri.go:89] found id: ""
	I0213 23:18:08.891567   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:18:08.891618   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.896712   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:08.896802   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:08.943555   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:08.943584   49120 cri.go:89] found id: ""
	I0213 23:18:08.943593   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:18:08.943654   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.948658   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:08.948730   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:08.995867   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:08.995896   49120 cri.go:89] found id: ""
	I0213 23:18:08.995905   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:18:08.995970   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.000810   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:09.000883   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:09.046606   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.046636   49120 cri.go:89] found id: ""
	I0213 23:18:09.046646   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:18:09.046706   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.050924   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:09.050986   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:09.097414   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.097445   49120 cri.go:89] found id: ""
	I0213 23:18:09.097456   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:18:09.097525   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.102101   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:09.102177   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:09.164244   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.164267   49120 cri.go:89] found id: ""
	I0213 23:18:09.164274   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:18:09.164323   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.169164   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:09.169238   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:09.217068   49120 cri.go:89] found id: ""
	I0213 23:18:09.217094   49120 logs.go:276] 0 containers: []
	W0213 23:18:09.217101   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:09.217106   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:09.217174   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:09.256986   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.257017   49120 cri.go:89] found id: ""
	I0213 23:18:09.257028   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:18:09.257088   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.261602   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:18:09.261625   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.314910   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:18:09.314957   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.361576   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:18:09.361609   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.433243   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:18:09.433281   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.485648   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:09.485698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:09.634091   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:18:09.634127   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:09.681649   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:18:09.681689   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:09.729410   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:09.729449   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:10.100058   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:18:10.100104   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:10.156178   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:10.156209   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:10.229188   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.229358   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.251947   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:10.251987   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:10.268224   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:18:10.268251   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:10.319580   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319608   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:10.319651   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:18:10.319663   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.319673   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.319685   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319696   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:13.453597   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:15.445609   49715 pod_ready.go:81] duration metric: took 4m0.000451749s waiting for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	E0213 23:18:15.445643   49715 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:18:15.445653   49715 pod_ready.go:38] duration metric: took 4m2.428270702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:18:15.445670   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:18:15.445716   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:15.445773   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:15.501757   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:15.501791   49715 cri.go:89] found id: ""
	I0213 23:18:15.501802   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:15.501863   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.507658   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:15.507738   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:15.552164   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:15.552197   49715 cri.go:89] found id: ""
	I0213 23:18:15.552204   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:15.552257   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.557704   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:15.557764   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:15.606147   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:15.606168   49715 cri.go:89] found id: ""
	I0213 23:18:15.606175   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:15.606231   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.610863   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:15.610939   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:15.655298   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:15.655320   49715 cri.go:89] found id: ""
	I0213 23:18:15.655329   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:15.655387   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.660000   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:15.660062   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:15.699700   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:15.699735   49715 cri.go:89] found id: ""
	I0213 23:18:15.699745   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:15.699815   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.704535   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:15.704614   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:15.746999   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:15.747028   49715 cri.go:89] found id: ""
	I0213 23:18:15.747038   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:15.747091   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.752065   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:15.752137   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:15.793372   49715 cri.go:89] found id: ""
	I0213 23:18:15.793404   49715 logs.go:276] 0 containers: []
	W0213 23:18:15.793415   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:15.793422   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:15.793487   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:15.839630   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:15.839660   49715 cri.go:89] found id: ""
	I0213 23:18:15.839668   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:15.839723   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.844199   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:15.844225   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:15.904450   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:15.904479   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:15.925777   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:15.925805   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:16.079602   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:16.079634   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:16.121369   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:16.121400   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:16.174404   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:16.174440   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:16.216286   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:16.216321   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:16.629527   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:16.629564   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:16.708003   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.708235   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.729748   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:16.729784   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:16.784398   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:16.784432   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:16.829885   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:16.829923   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:16.872036   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:16.872066   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:16.937327   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937359   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:16.937411   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:16.937421   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.937431   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.937441   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937449   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:20.329462   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:20.329500   49120 system_pods.go:61] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.329508   49120 system_pods.go:61] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.329515   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.329521   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.329527   49120 system_pods.go:61] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.329533   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.329543   49120 system_pods.go:61] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.329550   49120 system_pods.go:61] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.329560   49120 system_pods.go:74] duration metric: took 11.489522059s to wait for pod list to return data ...
	I0213 23:18:20.329569   49120 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:20.332784   49120 default_sa.go:45] found service account: "default"
	I0213 23:18:20.332809   49120 default_sa.go:55] duration metric: took 3.233136ms for default service account to be created ...
	I0213 23:18:20.332817   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:20.339002   49120 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:20.339033   49120 system_pods.go:89] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.339042   49120 system_pods.go:89] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.339049   49120 system_pods.go:89] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.339056   49120 system_pods.go:89] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.339063   49120 system_pods.go:89] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.339070   49120 system_pods.go:89] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.339084   49120 system_pods.go:89] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.339093   49120 system_pods.go:89] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.339116   49120 system_pods.go:126] duration metric: took 6.292649ms to wait for k8s-apps to be running ...
	I0213 23:18:20.339125   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:20.339183   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:20.354459   49120 system_svc.go:56] duration metric: took 15.325743ms WaitForService to wait for kubelet.
	I0213 23:18:20.354488   49120 kubeadm.go:581] duration metric: took 4m38.510005999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:20.354505   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:20.358160   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:20.358186   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:20.358195   49120 node_conditions.go:105] duration metric: took 3.685402ms to run NodePressure ...
	I0213 23:18:20.358205   49120 start.go:228] waiting for startup goroutines ...
	I0213 23:18:20.358211   49120 start.go:233] waiting for cluster config update ...
	I0213 23:18:20.358220   49120 start.go:242] writing updated cluster config ...
	I0213 23:18:20.358527   49120 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:20.409811   49120 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 23:18:20.412251   49120 out.go:177] * Done! kubectl is now configured to use "no-preload-778731" cluster and "default" namespace by default
	I0213 23:18:26.939087   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:18:26.956231   49715 api_server.go:72] duration metric: took 4m16.268553955s to wait for apiserver process to appear ...
	I0213 23:18:26.956259   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:18:26.956317   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:26.956382   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:27.006428   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.006455   49715 cri.go:89] found id: ""
	I0213 23:18:27.006465   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:27.006527   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.011468   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:27.011542   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:27.054309   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.054334   49715 cri.go:89] found id: ""
	I0213 23:18:27.054344   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:27.054393   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.058925   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:27.058979   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:27.101942   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.101971   49715 cri.go:89] found id: ""
	I0213 23:18:27.101981   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:27.102041   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.107540   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:27.107609   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:27.152126   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.152150   49715 cri.go:89] found id: ""
	I0213 23:18:27.152157   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:27.152203   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.156537   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:27.156608   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:27.202931   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:27.202952   49715 cri.go:89] found id: ""
	I0213 23:18:27.202959   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:27.203006   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.209339   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:27.209405   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:27.250771   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:27.250814   49715 cri.go:89] found id: ""
	I0213 23:18:27.250828   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:27.250898   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.255547   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:27.255621   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:27.297645   49715 cri.go:89] found id: ""
	I0213 23:18:27.297679   49715 logs.go:276] 0 containers: []
	W0213 23:18:27.297689   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:27.297697   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:27.297765   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:27.340690   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.340719   49715 cri.go:89] found id: ""
	I0213 23:18:27.340728   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:27.340786   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.345308   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:27.345338   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:27.481620   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:27.481653   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.541421   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:27.541456   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.594527   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:27.594559   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.657323   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:27.657358   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.710198   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:27.710234   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.750419   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:27.750451   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:28.148333   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:28.148374   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:28.162927   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:28.162959   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:28.214802   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:28.214835   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:28.264035   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:28.264061   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:28.328849   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:28.328888   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:28.408683   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.408859   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429691   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429721   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:28.429772   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:28.429780   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.429787   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429793   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429798   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:38.431065   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:18:38.438496   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:18:38.440109   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:38.440131   49715 api_server.go:131] duration metric: took 11.483865303s to wait for apiserver health ...
	I0213 23:18:38.440139   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:38.440163   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:38.440218   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:38.485767   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:38.485791   49715 cri.go:89] found id: ""
	I0213 23:18:38.485798   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:38.485847   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.490804   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:38.490876   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:38.540174   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:38.540196   49715 cri.go:89] found id: ""
	I0213 23:18:38.540203   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:38.540247   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.545816   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:38.545904   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:38.593443   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:38.593466   49715 cri.go:89] found id: ""
	I0213 23:18:38.593474   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:38.593531   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.598567   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:38.598642   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:38.646508   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:38.646539   49715 cri.go:89] found id: ""
	I0213 23:18:38.646549   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:38.646605   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.651425   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:38.651489   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:38.695133   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:38.695157   49715 cri.go:89] found id: ""
	I0213 23:18:38.695166   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:38.695226   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.700446   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:38.700504   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:38.748214   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.748251   49715 cri.go:89] found id: ""
	I0213 23:18:38.748261   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:38.748319   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.753466   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:38.753532   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:38.796480   49715 cri.go:89] found id: ""
	I0213 23:18:38.796505   49715 logs.go:276] 0 containers: []
	W0213 23:18:38.796514   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:38.796521   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:38.796597   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:38.838145   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.838189   49715 cri.go:89] found id: ""
	I0213 23:18:38.838199   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:38.838259   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.844252   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:38.844279   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.919402   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:38.919442   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.963733   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:38.963767   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:39.013301   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:39.013336   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:39.142161   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:39.142192   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:39.199423   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:39.199455   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:39.245639   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:39.245669   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:39.290916   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:39.290954   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:39.343373   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:39.343405   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:39.700393   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:39.700441   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:39.777386   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.777564   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.800035   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:39.800087   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:39.817941   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:39.817972   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:39.870635   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870675   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:39.870733   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:39.870744   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.870749   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.870756   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870764   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:49.878184   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:49.878220   49715 system_pods.go:61] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.878229   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.878237   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.878244   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.878250   49715 system_pods.go:61] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.878256   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.878268   49715 system_pods.go:61] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.878276   49715 system_pods.go:61] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.878284   49715 system_pods.go:74] duration metric: took 11.438139039s to wait for pod list to return data ...
	I0213 23:18:49.878294   49715 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:49.881702   49715 default_sa.go:45] found service account: "default"
	I0213 23:18:49.881730   49715 default_sa.go:55] duration metric: took 3.42943ms for default service account to be created ...
	I0213 23:18:49.881741   49715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:49.888356   49715 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:49.888380   49715 system_pods.go:89] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.888385   49715 system_pods.go:89] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.888392   49715 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.888397   49715 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.888403   49715 system_pods.go:89] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.888409   49715 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.888422   49715 system_pods.go:89] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.888434   49715 system_pods.go:89] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.888446   49715 system_pods.go:126] duration metric: took 6.698139ms to wait for k8s-apps to be running ...
	I0213 23:18:49.888456   49715 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:49.888497   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:49.905396   49715 system_svc.go:56] duration metric: took 16.928016ms WaitForService to wait for kubelet.
	I0213 23:18:49.905427   49715 kubeadm.go:581] duration metric: took 4m39.217754888s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:49.905452   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:49.909261   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:49.909296   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:49.909312   49715 node_conditions.go:105] duration metric: took 3.854435ms to run NodePressure ...
	I0213 23:18:49.909326   49715 start.go:228] waiting for startup goroutines ...
	I0213 23:18:49.909334   49715 start.go:233] waiting for cluster config update ...
	I0213 23:18:49.909347   49715 start.go:242] writing updated cluster config ...
	I0213 23:18:49.909654   49715 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:49.961022   49715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:49.963033   49715 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-083863" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:09:02 UTC, ends at Tue 2024-02-13 23:19:27 UTC. --
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.275796290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d7620a88-f3c9-47ad-ad82-d45b90e74c6b name=/runtime.v1.RuntimeService/Version
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.277145709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=36d071dc-192f-404c-beae-4e5e5f90710d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.277670096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866367277654632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=36d071dc-192f-404c-beae-4e5e5f90710d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.278213858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5661c9dc-c96c-459e-af02-a6bb5bb5ca55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.278292336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5661c9dc-c96c-459e-af02-a6bb5bb5ca55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.278518106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5661c9dc-c96c-459e-af02-a6bb5bb5ca55 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.319326935Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=5ae60141-4712-40f5-9fa4-0da923ba0905 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.319693718Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:26b5895d7f20369070e02667628d132fae63e8e7762dec2204b45e2356711ca7,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-c6rp6,Uid:cfb3f364-5eee-45a0-bd22-88d1efaefee3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865799731344681,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-c6rp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb3f364-5eee-45a0-bd22-88d1efaefee3,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:09:59.394769991Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&PodSandboxMetadata{Name:busybox,Uid:c64fb331-f46d-44fb-a6fe-cc7e421d13ee,Namespace
:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865786340937962,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:09:42.485709925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-kr6t9,Uid:0c060820-1e79-4e3e-92d8-ec77f75741c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865786326727570,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:09
:42.48571133Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e3977149-1877-4180-b568-72c5ae81788f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865784935627212,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k
8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T23:09:42.48570845Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&PodSandboxMetadata{Name:kube-proxy-nj7qx,Uid:4efb1b13-7f14-49bd-aacf-600b7733cbe0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865784635198270,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-600b7733cbe0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.
io/config.seen: 2024-02-13T23:09:42.485705485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-245122,Uid:a18b4e74ab253fe005b68903242f6bc8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865775293788836,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a18b4e74ab253fe005b68903242f6bc8,kubernetes.io/config.seen: 2024-02-13T23:09:34.270819023Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-245122,Uid:b39706a67360d65bfa
3cf2560791efe9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865775119606676,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b39706a67360d65bfa3cf2560791efe9,kubernetes.io/config.seen: 2024-02-13T23:09:34.270807232Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-245122,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865775104324104,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-02-13T23:09:34.270814897Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-245122,Uid:fd95658e6d145feff7b098e46f743938,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707865775092802759,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fd95658e6d145feff7b098e46f743938,kubernetes.io/config.seen: 2024-02-13T23:09:34.270817071Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="
go-grpc-middleware/chain.go:25" id=5ae60141-4712-40f5-9fa4-0da923ba0905 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.320248994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed923afc-a0b4-4915-beb5-a10db499d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.320322630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed923afc-a0b4-4915-beb5-a10db499d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.320627230Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed923afc-a0b4-4915-beb5-a10db499d8a6 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.323891295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3fefbbaf-e3d8-42af-967f-6906e16bfb54 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.323966825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3fefbbaf-e3d8-42af-967f-6906e16bfb54 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.325341088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8d4534e4-d645-42c7-9b8b-4369592ab9d6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.325949325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866367325928651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8d4534e4-d645-42c7-9b8b-4369592ab9d6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.326614458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5e417c07-b234-467b-8b5f-74a440185fd1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.326667939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5e417c07-b234-467b-8b5f-74a440185fd1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.326847264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5e417c07-b234-467b-8b5f-74a440185fd1 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.367973955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cb251229-ee95-4998-8176-276160f80326 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.368041963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cb251229-ee95-4998-8176-276160f80326 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.369130996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b8399eef-b751-41d7-9f17-410cb6d059d5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.369502401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866367369490588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b8399eef-b751-41d7-9f17-410cb6d059d5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.370197511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f8f9515e-3e48-4f01-bc21-f12f3045dfb8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.370242000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f8f9515e-3e48-4f01-bc21-f12f3045dfb8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:19:27 old-k8s-version-245122 crio[715]: time="2024-02-13 23:19:27.370423992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f8f9515e-3e48-4f01-bc21-f12f3045dfb8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab470e6a37deb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Running             storage-provisioner       1                   0ee1de177ef1f       storage-provisioner
	6f9549484d35e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   1ea1776a6fd35       busybox
	2cabfb623c7fb       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   dff35a34c018d       coredns-5644d7b6d9-kr6t9
	9609117f701bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   0ee1de177ef1f       storage-provisioner
	f43c15c3d3903       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   6985f34c8dfeb       kube-proxy-nj7qx
	5926aa9fbfac6       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   3cbb9b1b585e8       etcd-old-k8s-version-245122
	2ec1e75ab6923       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   c235794c2618d       kube-apiserver-old-k8s-version-245122
	1626274a7b38f       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   39d88fda12f10       kube-scheduler-old-k8s-version-245122
	b4b01d14f2ef4       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   fbee32e09e8bd       kube-controller-manager-old-k8s-version-245122
	
	
	==> coredns [2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19] <==
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	2024-02-13T23:09:51.877Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-02-13T23:09:51.890Z [INFO] 127.0.0.1:49025 - 59187 "HINFO IN 5388163579779728481.5269519262384264271. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013163185s
	2024-02-13T23:09:53.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2024-02-13T23:10:03.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2024-02-13T23:10:13.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I0213 23:10:16.877224       1 trace.go:82] Trace[1240964328]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.876636146 +0000 UTC m=+0.045899897) (total time: 30.000549497s):
	Trace[1240964328]: [30.000549497s] [30.000549497s] END
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0213 23:10:16.877797       1 trace.go:82] Trace[85575035]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.877466157 +0000 UTC m=+0.046729882) (total time: 30.000283389s):
	Trace[85575035]: [30.000283389s] [30.000283389s] END
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0213 23:10:16.877994       1 trace.go:82] Trace[26344488]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.877258222 +0000 UTC m=+0.046521949) (total time: 30.000718418s):
	Trace[26344488]: [30.000718418s] [30.000718418s] END
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-245122
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-245122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=old-k8s-version-245122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T22_58_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:58:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:19:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:19:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:19:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:19:13 +0000   Tue, 13 Feb 2024 23:09:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    old-k8s-version-245122
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 3817d3973781432fa9a183fb2b2072e7
	 System UUID:                3817d397-3781-432f-a9a1-83fb2b2072e7
	 Boot ID:                    76248c73-daaa-4ecd-ab96-a014cd915ca9
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                coredns-5644d7b6d9-kr6t9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                etcd-old-k8s-version-245122                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-apiserver-old-k8s-version-245122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-controller-manager-old-k8s-version-245122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                kube-proxy-nj7qx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                kube-scheduler-old-k8s-version-245122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                metrics-server-74d5856cc6-c6rp6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m28s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                    kube-proxy, old-k8s-version-245122  Starting kube-proxy.
	  Normal  Starting                 9m53s                  kubelet, old-k8s-version-245122     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m53s (x8 over 9m53s)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s (x8 over 9m53s)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s (x7 over 9m53s)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet, old-k8s-version-245122     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m41s                  kube-proxy, old-k8s-version-245122  Starting kube-proxy.
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.084539] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.181843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb13 23:09] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160862] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.563977] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.608895] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.132022] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.183865] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126986] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.286935] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +19.008648] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[  +0.484880] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.367555] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d] <==
	2024-02-13 23:09:37.381840 I | etcdserver: election = 1000ms
	2024-02-13 23:09:37.381843 I | etcdserver: snapshot count = 10000
	2024-02-13 23:09:37.381850 I | etcdserver: advertise client URLs = https://192.168.50.36:2379
	2024-02-13 23:09:37.385337 I | etcdserver: restarting member e5487579cc149d4d in cluster 31bd1a1c1ff06930 at commit index 533
	2024-02-13 23:09:37.385465 I | raft: e5487579cc149d4d became follower at term 2
	2024-02-13 23:09:37.385506 I | raft: newRaft e5487579cc149d4d [peers: [], term: 2, commit: 533, applied: 0, lastindex: 533, lastterm: 2]
	2024-02-13 23:09:37.394993 W | auth: simple token is not cryptographically signed
	2024-02-13 23:09:37.398280 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-02-13 23:09:37.399815 I | etcdserver/membership: added member e5487579cc149d4d [https://192.168.50.36:2380] to cluster 31bd1a1c1ff06930
	2024-02-13 23:09:37.399995 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-02-13 23:09:37.400084 I | etcdserver/api: enabled capabilities for version 3.3
	2024-02-13 23:09:37.404413 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-13 23:09:37.404709 I | embed: listening for metrics on http://192.168.50.36:2381
	2024-02-13 23:09:37.405098 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-02-13 23:09:39.186202 I | raft: e5487579cc149d4d is starting a new election at term 2
	2024-02-13 23:09:39.186297 I | raft: e5487579cc149d4d became candidate at term 3
	2024-02-13 23:09:39.186322 I | raft: e5487579cc149d4d received MsgVoteResp from e5487579cc149d4d at term 3
	2024-02-13 23:09:39.186348 I | raft: e5487579cc149d4d became leader at term 3
	2024-02-13 23:09:39.186365 I | raft: raft.node: e5487579cc149d4d elected leader e5487579cc149d4d at term 3
	2024-02-13 23:09:39.186861 I | etcdserver: published {Name:old-k8s-version-245122 ClientURLs:[https://192.168.50.36:2379]} to cluster 31bd1a1c1ff06930
	2024-02-13 23:09:39.187069 I | embed: ready to serve client requests
	2024-02-13 23:09:39.187499 I | embed: ready to serve client requests
	2024-02-13 23:09:39.189081 I | embed: serving client requests on 192.168.50.36:2379
	2024-02-13 23:09:39.190074 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-13 23:09:45.898892 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2026" took too long (109.346107ms) to execute
	
	
	==> kernel <==
	 23:19:27 up 10 min,  0 users,  load average: 0.12, 0.22, 0.18
	Linux old-k8s-version-245122 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022] <==
	I0213 23:10:44.244378       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:10:44.244469       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:10:44.244508       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:10:44.244518       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:12:44.244801       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:12:44.244915       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:12:44.244972       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:12:44.244979       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:14:43.545282       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:14:43.545397       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:14:43.545448       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:14:43.545456       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:15:43.545912       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:15:43.546004       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:15:43.546057       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:15:43.546072       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:17:43.546512       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:17:43.546840       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:17:43.546895       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:17:43.546902       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15] <==
	E0213 23:13:01.394312       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:13:11.904349       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:13:31.647237       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:13:43.906860       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:14:01.899418       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:14:15.909736       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:14:32.151850       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:14:47.911973       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:15:02.404683       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:15:19.914387       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:15:32.656979       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:15:51.917190       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:16:02.909233       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:16:23.919327       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:16:33.161885       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:16:55.921330       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:17:03.414278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:17:27.924896       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:17:33.666116       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:17:59.927161       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:18:03.918298       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:18:31.929274       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:18:34.170686       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:19:03.931501       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:19:04.422927       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b] <==
	W0213 22:59:15.934244       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0213 22:59:15.965583       1 node.go:135] Successfully retrieved node IP: 192.168.50.36
	I0213 22:59:15.965692       1 server_others.go:149] Using iptables Proxier.
	I0213 22:59:15.975303       1 server.go:529] Version: v1.16.0
	I0213 22:59:15.982847       1 config.go:313] Starting service config controller
	I0213 22:59:15.983698       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0213 22:59:15.982982       1 config.go:131] Starting endpoints config controller
	I0213 22:59:15.984963       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0213 22:59:16.084279       1 shared_informer.go:204] Caches are synced for service config 
	I0213 22:59:16.088403       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0213 23:09:46.222169       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0213 23:09:46.231887       1 node.go:135] Successfully retrieved node IP: 192.168.50.36
	I0213 23:09:46.231943       1 server_others.go:149] Using iptables Proxier.
	I0213 23:09:46.233169       1 server.go:529] Version: v1.16.0
	I0213 23:09:46.234957       1 config.go:313] Starting service config controller
	I0213 23:09:46.235037       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0213 23:09:46.236737       1 config.go:131] Starting endpoints config controller
	I0213 23:09:46.236795       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0213 23:09:46.337457       1 shared_informer.go:204] Caches are synced for service config 
	I0213 23:09:46.337869       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960] <==
	E0213 22:58:53.777631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:58:54.752094       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:58:54.759113       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:58:54.769148       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:58:54.770103       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:58:54.771879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:58:54.771956       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:58:54.773539       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:58:54.779005       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:58:54.782515       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:58:54.786984       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:58:54.792125       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:59:14.246254       1 factory.go:585] pod is already present in the activeQ
	E0213 22:59:14.270938       1 factory.go:585] pod is already present in the activeQ
	I0213 23:09:36.978473       1 serving.go:319] Generated self-signed cert in-memory
	W0213 23:09:42.494730       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 23:09:42.494980       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:09:42.495282       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 23:09:42.498329       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 23:09:42.531608       1 server.go:143] Version: v1.16.0
	I0213 23:09:42.531754       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0213 23:09:42.533913       1 authorization.go:47] Authorization is disabled
	W0213 23:09:42.533959       1 authentication.go:79] Authentication is disabled
	I0213 23:09:42.533973       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0213 23:09:42.534468       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:09:02 UTC, ends at Tue 2024-02-13 23:19:27 UTC. --
	Feb 13 23:14:34 old-k8s-version-245122 kubelet[1023]: E0213 23:14:34.364275    1023 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Feb 13 23:14:47 old-k8s-version-245122 kubelet[1023]: E0213 23:14:47.285830    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:15:02 old-k8s-version-245122 kubelet[1023]: E0213 23:15:02.286789    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:15:17 old-k8s-version-245122 kubelet[1023]: E0213 23:15:17.286675    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:15:31 old-k8s-version-245122 kubelet[1023]: E0213 23:15:31.287988    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:15:46 old-k8s-version-245122 kubelet[1023]: E0213 23:15:46.307989    1023 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:15:46 old-k8s-version-245122 kubelet[1023]: E0213 23:15:46.308135    1023 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:15:46 old-k8s-version-245122 kubelet[1023]: E0213 23:15:46.308193    1023 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:15:46 old-k8s-version-245122 kubelet[1023]: E0213 23:15:46.308224    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 13 23:16:00 old-k8s-version-245122 kubelet[1023]: E0213 23:16:00.286655    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:16:13 old-k8s-version-245122 kubelet[1023]: E0213 23:16:13.285389    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:16:28 old-k8s-version-245122 kubelet[1023]: E0213 23:16:28.285773    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:16:43 old-k8s-version-245122 kubelet[1023]: E0213 23:16:43.285091    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:16:57 old-k8s-version-245122 kubelet[1023]: E0213 23:16:57.285082    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:17:09 old-k8s-version-245122 kubelet[1023]: E0213 23:17:09.285998    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:17:24 old-k8s-version-245122 kubelet[1023]: E0213 23:17:24.286727    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:17:38 old-k8s-version-245122 kubelet[1023]: E0213 23:17:38.286100    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:17:51 old-k8s-version-245122 kubelet[1023]: E0213 23:17:51.285975    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:18:05 old-k8s-version-245122 kubelet[1023]: E0213 23:18:05.285498    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:18:17 old-k8s-version-245122 kubelet[1023]: E0213 23:18:17.285741    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:18:31 old-k8s-version-245122 kubelet[1023]: E0213 23:18:31.286036    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:18:46 old-k8s-version-245122 kubelet[1023]: E0213 23:18:46.285354    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:18:57 old-k8s-version-245122 kubelet[1023]: E0213 23:18:57.285865    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:19:08 old-k8s-version-245122 kubelet[1023]: E0213 23:19:08.285709    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:19:22 old-k8s-version-245122 kubelet[1023]: E0213 23:19:22.285412    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1] <==
	I0213 22:59:17.311750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:59:17.322294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:59:17.322561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:59:17.340576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:59:17.340969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720!
	I0213 22:59:17.347786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cace87c9-89a0-466f-97f9-38c9b9e6c48b", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720 became leader
	I0213 22:59:17.445007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720!
	I0213 23:09:46.905463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0213 23:10:16.907840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249] <==
	I0213 23:10:17.724786       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:10:17.733589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:10:17.733811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:10:35.149958       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:10:35.151160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1!
	I0213 23:10:35.152596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cace87c9-89a0-466f-97f9-38c9b9e6c48b", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1 became leader
	I0213 23:10:35.251789       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-245122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-c6rp6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6: exit status 1 (70.542642ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-c6rp6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-340656 -n embed-certs-340656
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:27:06.872744905 +0000 UTC m=+5446.487518818
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-340656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-340656 logs -n 25: (1.873117152s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:05:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:05:02.640377   49715 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:05:02.640501   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640509   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:05:02.640513   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640736   49715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:05:02.641321   49715 out.go:298] Setting JSON to false
	I0213 23:05:02.642273   49715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6454,"bootTime":1707859049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:05:02.642347   49715 start.go:138] virtualization: kvm guest
	I0213 23:05:02.645098   49715 out.go:177] * [default-k8s-diff-port-083863] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:05:02.646964   49715 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:05:02.646970   49715 notify.go:220] Checking for updates...
	I0213 23:05:02.648511   49715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:05:02.650105   49715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:05:02.651715   49715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:05:02.653359   49715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:05:02.655095   49715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:05:02.657048   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:05:02.657426   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.657495   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.672324   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0213 23:05:02.672730   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.673260   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.673290   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.673647   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.673817   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.674096   49715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:05:02.674432   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.674472   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.688915   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0213 23:05:02.689349   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.689790   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.689816   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.690223   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.690421   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.727324   49715 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:05:02.728797   49715 start.go:298] selected driver: kvm2
	I0213 23:05:02.728815   49715 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.728927   49715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:05:02.729600   49715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.729674   49715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:05:02.745692   49715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:05:02.746106   49715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:05:02.746172   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:05:02.746187   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:05:02.746199   49715 start_flags.go:321] config:
	{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.746779   49715 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.748860   49715 out.go:177] * Starting control plane node default-k8s-diff-port-083863 in cluster default-k8s-diff-port-083863
	I0213 23:05:02.750290   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:05:02.750326   49715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:05:02.750333   49715 cache.go:56] Caching tarball of preloaded images
	I0213 23:05:02.750421   49715 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:05:02.750463   49715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:05:02.750576   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:05:02.750762   49715 start.go:365] acquiring machines lock for default-k8s-diff-port-083863: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:05:07.158187   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:10.230150   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:16.310133   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:19.382235   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:25.462139   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:28.534229   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:34.614137   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:37.686165   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:43.766206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:46.838168   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:52.918134   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:55.990211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:02.070192   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:05.142167   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:11.222152   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:14.294088   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:20.374194   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:23.446217   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:29.526175   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:32.598147   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:38.678146   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:41.750169   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:47.830142   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:50.902206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:56.982180   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:00.054195   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:06.134182   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:09.206215   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:15.286248   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:18.358211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:24.438162   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:27.510191   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:33.590177   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:36.662174   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:42.742237   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:45.814203   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:48.818472   49120 start.go:369] acquired machines lock for "no-preload-778731" in 4m31.005837415s
	I0213 23:07:48.818529   49120 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:07:48.818538   49120 fix.go:54] fixHost starting: 
	I0213 23:07:48.818916   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:07:48.818948   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:07:48.833483   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 23:07:48.833925   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:07:48.834425   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:07:48.834452   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:07:48.834778   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:07:48.835000   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:07:48.835155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:07:48.836889   49120 fix.go:102] recreateIfNeeded on no-preload-778731: state=Stopped err=<nil>
	I0213 23:07:48.836930   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	W0213 23:07:48.837148   49120 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:07:48.840033   49120 out.go:177] * Restarting existing kvm2 VM for "no-preload-778731" ...
	I0213 23:07:48.816416   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:07:48.816456   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:07:48.818324   49036 machine.go:91] provisioned docker machine in 4m37.408860809s
	I0213 23:07:48.818361   49036 fix.go:56] fixHost completed within 4m37.431023423s
	I0213 23:07:48.818366   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 4m37.431037395s
	W0213 23:07:48.818389   49036 start.go:694] error starting host: provision: host is not running
	W0213 23:07:48.818527   49036 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 23:07:48.818541   49036 start.go:709] Will try again in 5 seconds ...
	I0213 23:07:48.841324   49120 main.go:141] libmachine: (no-preload-778731) Calling .Start
	I0213 23:07:48.841532   49120 main.go:141] libmachine: (no-preload-778731) Ensuring networks are active...
	I0213 23:07:48.842327   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network default is active
	I0213 23:07:48.842678   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network mk-no-preload-778731 is active
	I0213 23:07:48.843032   49120 main.go:141] libmachine: (no-preload-778731) Getting domain xml...
	I0213 23:07:48.843852   49120 main.go:141] libmachine: (no-preload-778731) Creating domain...
	I0213 23:07:50.042665   49120 main.go:141] libmachine: (no-preload-778731) Waiting to get IP...
	I0213 23:07:50.043679   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.044091   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.044189   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.044069   50144 retry.go:31] will retry after 251.949505ms: waiting for machine to come up
	I0213 23:07:50.297817   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.298535   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.298567   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.298493   50144 retry.go:31] will retry after 319.494876ms: waiting for machine to come up
	I0213 23:07:50.620050   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.620443   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.620468   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.620395   50144 retry.go:31] will retry after 308.031117ms: waiting for machine to come up
	I0213 23:07:50.929942   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.930361   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.930391   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.930309   50144 retry.go:31] will retry after 513.800078ms: waiting for machine to come up
	I0213 23:07:51.446223   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:51.446875   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:51.446904   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:51.446813   50144 retry.go:31] will retry after 592.80917ms: waiting for machine to come up
	I0213 23:07:52.042126   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.042542   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.042573   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.042519   50144 retry.go:31] will retry after 688.102963ms: waiting for machine to come up
	I0213 23:07:53.818751   49036 start.go:365] acquiring machines lock for old-k8s-version-245122: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:07:52.732194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.732576   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.732602   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.732538   50144 retry.go:31] will retry after 1.143041451s: waiting for machine to come up
	I0213 23:07:53.877287   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:53.877661   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:53.877687   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:53.877624   50144 retry.go:31] will retry after 918.528315ms: waiting for machine to come up
	I0213 23:07:54.797760   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:54.798287   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:54.798314   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:54.798252   50144 retry.go:31] will retry after 1.679404533s: waiting for machine to come up
	I0213 23:07:56.479283   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:56.479853   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:56.479880   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:56.479785   50144 retry.go:31] will retry after 1.510596076s: waiting for machine to come up
	I0213 23:07:57.992757   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:57.993320   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:57.993352   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:57.993274   50144 retry.go:31] will retry after 2.041602638s: waiting for machine to come up
	I0213 23:08:00.036654   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:00.037130   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:00.037162   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:00.037075   50144 retry.go:31] will retry after 3.403460211s: waiting for machine to come up
	I0213 23:08:03.444689   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:03.445152   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:03.445176   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:03.445088   50144 retry.go:31] will retry after 4.270182412s: waiting for machine to come up
	I0213 23:08:09.107106   49443 start.go:369] acquired machines lock for "embed-certs-340656" in 3m54.456203319s
	I0213 23:08:09.107175   49443 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:09.107194   49443 fix.go:54] fixHost starting: 
	I0213 23:08:09.107647   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:09.107696   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:09.124314   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0213 23:08:09.124675   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:09.125131   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:08:09.125153   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:09.125509   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:09.125705   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:09.125898   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:08:09.127641   49443 fix.go:102] recreateIfNeeded on embed-certs-340656: state=Stopped err=<nil>
	I0213 23:08:09.127661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	W0213 23:08:09.127830   49443 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:09.130334   49443 out.go:177] * Restarting existing kvm2 VM for "embed-certs-340656" ...
	I0213 23:08:09.132354   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Start
	I0213 23:08:09.132546   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring networks are active...
	I0213 23:08:09.133391   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network default is active
	I0213 23:08:09.133758   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network mk-embed-certs-340656 is active
	I0213 23:08:09.134160   49443 main.go:141] libmachine: (embed-certs-340656) Getting domain xml...
	I0213 23:08:09.134954   49443 main.go:141] libmachine: (embed-certs-340656) Creating domain...
	I0213 23:08:07.719971   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.720520   49120 main.go:141] libmachine: (no-preload-778731) Found IP for machine: 192.168.83.31
	I0213 23:08:07.720541   49120 main.go:141] libmachine: (no-preload-778731) Reserving static IP address...
	I0213 23:08:07.720559   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has current primary IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.721043   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.721071   49120 main.go:141] libmachine: (no-preload-778731) DBG | skip adding static IP to network mk-no-preload-778731 - found existing host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"}
	I0213 23:08:07.721086   49120 main.go:141] libmachine: (no-preload-778731) Reserved static IP address: 192.168.83.31
	I0213 23:08:07.721105   49120 main.go:141] libmachine: (no-preload-778731) DBG | Getting to WaitForSSH function...
	I0213 23:08:07.721120   49120 main.go:141] libmachine: (no-preload-778731) Waiting for SSH to be available...
	I0213 23:08:07.723769   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724343   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.724370   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724485   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH client type: external
	I0213 23:08:07.724515   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa (-rw-------)
	I0213 23:08:07.724552   49120 main.go:141] libmachine: (no-preload-778731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:07.724577   49120 main.go:141] libmachine: (no-preload-778731) DBG | About to run SSH command:
	I0213 23:08:07.724605   49120 main.go:141] libmachine: (no-preload-778731) DBG | exit 0
	I0213 23:08:07.823050   49120 main.go:141] libmachine: (no-preload-778731) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:07.823504   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetConfigRaw
	I0213 23:08:07.824155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:07.826730   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827237   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.827277   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827608   49120 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:08:07.827851   49120 machine.go:88] provisioning docker machine ...
	I0213 23:08:07.827877   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:07.828112   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828416   49120 buildroot.go:166] provisioning hostname "no-preload-778731"
	I0213 23:08:07.828464   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828745   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.832015   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832438   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.832477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832698   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.832929   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833125   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833288   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.833480   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.833828   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.833845   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778731 && echo "no-preload-778731" | sudo tee /etc/hostname
	I0213 23:08:07.979041   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778731
	
	I0213 23:08:07.979079   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.982378   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982755   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.982783   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982982   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.983137   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983346   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983462   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.983600   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.983946   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.983967   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778731/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:08.122610   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:08.122641   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:08.122657   49120 buildroot.go:174] setting up certificates
	I0213 23:08:08.122666   49120 provision.go:83] configureAuth start
	I0213 23:08:08.122674   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:08.122935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:08.125641   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126016   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.126046   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126205   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.128441   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128736   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.128780   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128918   49120 provision.go:138] copyHostCerts
	I0213 23:08:08.128984   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:08.128997   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:08.129067   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:08.129198   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:08.129211   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:08.129248   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:08.129321   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:08.129335   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:08.129373   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:08.129443   49120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.no-preload-778731 san=[192.168.83.31 192.168.83.31 localhost 127.0.0.1 minikube no-preload-778731]
	I0213 23:08:08.326156   49120 provision.go:172] copyRemoteCerts
	I0213 23:08:08.326234   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:08.326263   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.329373   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.329952   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.329986   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.330257   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.330447   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.330599   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.330737   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.423570   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:08.447689   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:08.472766   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:08:08.496594   49120 provision.go:86] duration metric: configureAuth took 373.917105ms
	I0213 23:08:08.496623   49120 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:08.496815   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:08:08.496899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.499464   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499771   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.499801   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.500116   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500284   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500459   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.500651   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.500962   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.500981   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:08.828899   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:08.828935   49120 machine.go:91] provisioned docker machine in 1.001067662s
	I0213 23:08:08.828948   49120 start.go:300] post-start starting for "no-preload-778731" (driver="kvm2")
	I0213 23:08:08.828966   49120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:08.828987   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:08.829378   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:08.829401   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.831985   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832340   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.832365   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832498   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.832717   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.832873   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.833022   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.930192   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:08.934633   49120 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:08.934660   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:08.934723   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:08.934804   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:08.934893   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:08.945400   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:08.973850   49120 start.go:303] post-start completed in 144.888108ms
	I0213 23:08:08.973894   49120 fix.go:56] fixHost completed within 20.155355472s
	I0213 23:08:08.973917   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.976477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976799   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.976831   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976990   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.977177   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977358   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977513   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.977664   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.978069   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.978082   49120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:09.106952   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865689.053803664
	
	I0213 23:08:09.106977   49120 fix.go:206] guest clock: 1707865689.053803664
	I0213 23:08:09.106984   49120 fix.go:219] Guest: 2024-02-13 23:08:09.053803664 +0000 UTC Remote: 2024-02-13 23:08:08.973898202 +0000 UTC m=+291.312557253 (delta=79.905462ms)
	I0213 23:08:09.107004   49120 fix.go:190] guest clock delta is within tolerance: 79.905462ms
	I0213 23:08:09.107011   49120 start.go:83] releasing machines lock for "no-preload-778731", held for 20.288505954s
	I0213 23:08:09.107046   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.107372   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:09.110226   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110592   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.110623   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110795   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111368   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111531   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111622   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:09.111662   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.113712   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.114053   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.114096   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.117964   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.118031   49120 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:09.118065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.118167   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.118318   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.118615   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.120610   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121054   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.121088   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121290   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.121461   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.121627   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.121770   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.234065   49120 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:09.240751   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:09.393966   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:09.401672   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:09.401767   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:09.426073   49120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:09.426099   49120 start.go:475] detecting cgroup driver to use...
	I0213 23:08:09.426172   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:09.446114   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:09.461330   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:09.461404   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:09.475964   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:09.490801   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:09.621898   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:09.747413   49120 docker.go:233] disabling docker service ...
	I0213 23:08:09.747470   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:09.766642   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:09.783116   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:09.910634   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:10.052181   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:10.066413   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:10.089436   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:10.089505   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.100366   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:10.100453   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.111681   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.122231   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.132945   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:10.146287   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:10.156405   49120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:10.156481   49120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:10.172152   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:10.182862   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:10.315633   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:10.509774   49120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:10.509878   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:10.514924   49120 start.go:543] Will wait 60s for crictl version
	I0213 23:08:10.515016   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.518898   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:10.558596   49120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:10.558695   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.611876   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.664604   49120 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:08:10.665908   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:10.669029   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669393   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:10.669442   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669676   49120 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:10.673975   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:10.686760   49120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:08:10.686830   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:10.730784   49120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:08:10.730813   49120 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:08:10.730900   49120 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.730903   49120 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.730909   49120 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.730914   49120 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.731026   49120 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.731083   49120 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.731131   49120 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.731497   49120 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732506   49120 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.732511   49120 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.732513   49120 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.732543   49120 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732577   49120 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.732597   49120 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.732719   49120 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.732759   49120 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.880038   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.891830   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0213 23:08:10.905668   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.930079   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.940850   49120 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0213 23:08:10.940894   49120 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.940941   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.942664   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.985299   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.011467   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.040720   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.099497   49120 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0213 23:08:11.099544   49120 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0213 23:08:11.099577   49120 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.099614   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:11.099636   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099651   49120 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0213 23:08:11.099683   49120 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.099711   49120 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0213 23:08:11.099740   49120 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.099746   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099760   49120 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0213 23:08:11.099767   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099782   49120 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.099547   49120 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.099901   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099916   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.107567   49120 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0213 23:08:11.107614   49120 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.107675   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.119038   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.157701   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.157799   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.157722   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.157768   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.157830   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.157919   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0213 23:08:11.158002   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.200990   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 23:08:11.201117   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:11.299985   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.300039   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 23:08:11.300041   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300130   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:11.300137   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300148   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0213 23:08:11.300163   49120 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300198   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300209   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300216   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0213 23:08:11.300203   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300098   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300293   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300096   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.318252   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0213 23:08:11.318307   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318355   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318520   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
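	The block above is the per-image cache check: podman image inspect probes what the guest runtime already holds, crictl rmi drops a stale tag when an image has to be replaced, and the stat call tests whether the cached tarball is already under /var/lib/minikube/images so the transfer can be skipped (the "copy: skipping ... (exists)" lines). The %!s(MISSING) %!y(MISSING) fragments appear to be Go's fmt re-rendering the literal stat format string when the log is printed; the command actually run is the plain stat shown below. A minimal sketch of the same check, run by hand on the guest, using the etcd tarball path from the log:
	  stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0 \
	    && echo "cached tarball already present; transfer can be skipped"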
	I0213 23:08:10.406170   49443 main.go:141] libmachine: (embed-certs-340656) Waiting to get IP...
	I0213 23:08:10.407139   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.407616   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.407692   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.407598   50262 retry.go:31] will retry after 193.299479ms: waiting for machine to come up
	I0213 23:08:10.603143   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.603673   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.603696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.603627   50262 retry.go:31] will retry after 369.099644ms: waiting for machine to come up
	I0213 23:08:10.974421   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.974922   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.974953   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.974870   50262 retry.go:31] will retry after 418.956642ms: waiting for machine to come up
	I0213 23:08:11.395489   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:11.395974   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:11.396005   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:11.395937   50262 retry.go:31] will retry after 610.320769ms: waiting for machine to come up
	I0213 23:08:12.007695   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.008167   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.008198   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.008115   50262 retry.go:31] will retry after 624.461953ms: waiting for machine to come up
	I0213 23:08:12.634088   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.634519   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.634552   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.634467   50262 retry.go:31] will retry after 903.217503ms: waiting for machine to come up
	I0213 23:08:13.539114   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:13.539683   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:13.539725   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:13.539611   50262 retry.go:31] will retry after 747.647967ms: waiting for machine to come up
	I0213 23:08:14.288632   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:14.289301   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:14.289338   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:14.289236   50262 retry.go:31] will retry after 1.415080779s: waiting for machine to come up
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.810648669s)
	I0213 23:08:15.110937   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.810587707s)
	I0213 23:08:15.110961   49120 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:15.110969   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0213 23:08:15.111009   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:17.178104   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067071549s)
	I0213 23:08:17.178130   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0213 23:08:17.178156   49120 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:17.178204   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:15.706329   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:15.706863   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:15.706901   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:15.706769   50262 retry.go:31] will retry after 1.500671136s: waiting for machine to come up
	I0213 23:08:17.209706   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:17.210252   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:17.210278   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:17.210198   50262 retry.go:31] will retry after 1.743342291s: waiting for machine to come up
	I0213 23:08:18.956397   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:18.956934   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:18.956971   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:18.956874   50262 retry.go:31] will retry after 2.095777111s: waiting for machine to come up
	I0213 23:08:18.227625   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.049388261s)
	I0213 23:08:18.227663   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 23:08:18.227691   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:18.227756   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:21.120783   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.892997016s)
	I0213 23:08:21.120823   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0213 23:08:21.120854   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.120908   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.055630   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:21.056028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:21.056106   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:21.056004   50262 retry.go:31] will retry after 3.144708692s: waiting for machine to come up
	I0213 23:08:24.202158   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:24.202562   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:24.202584   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:24.202515   50262 retry.go:31] will retry after 3.072407019s: waiting for machine to come up
	I0213 23:08:23.793772   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.672817599s)
	I0213 23:08:23.793813   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0213 23:08:23.793841   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:23.793916   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:25.866352   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.072399119s)
	I0213 23:08:25.866388   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0213 23:08:25.866422   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:25.866469   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:27.315469   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.44897051s)
	I0213 23:08:27.315502   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0213 23:08:27.315534   49120 cache_images.go:123] Successfully loaded all cached images
	I0213 23:08:27.315540   49120 cache_images.go:92] LoadImages completed in 16.584715329s
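	With LoadImages complete, every cached tarball has been fed through podman into the guest's image store. The per-image sequence can be reproduced by hand; the commands below are only an illustrative recap of what the log already shows, with etcd as the example and run over SSH on the no-preload-778731 guest:
	  sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	  sudo podman image inspect --format '{{.Id}}' registry.k8s.io/etcd:3.5.10-0
	  sudo /usr/bin/crictl images | grep etcd    # confirm the CRI view matches the podman store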
	I0213 23:08:27.315650   49120 ssh_runner.go:195] Run: crio config
	I0213 23:08:27.383180   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:27.383203   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:27.383224   49120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:27.383249   49120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778731 NodeName:no-preload-778731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:27.383445   49120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778731"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:27.383545   49120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-778731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:27.383606   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:08:27.393312   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:27.393384   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:27.401513   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0213 23:08:27.419705   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:08:27.439236   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
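	At this point the kubeadm config printed above has been staged on the guest as /var/tmp/minikube/kubeadm.yaml.new (2106 bytes per the scp line). A hedged sketch for sanity-checking the generated file by hand, assuming the kubeadm binary under /var/lib/minikube/binaries is used as shown elsewhere in the log; the dry-run makes no cluster changes:
	  sudo wc -c /var/tmp/minikube/kubeadm.yaml.new        # expect 2106 bytes, matching the scp above
	  sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run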
	I0213 23:08:27.457026   49120 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:27.461679   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:27.474701   49120 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731 for IP: 192.168.83.31
	I0213 23:08:27.474740   49120 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:27.474922   49120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:27.474966   49120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:27.475042   49120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.key
	I0213 23:08:27.475102   49120 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key.049c2370
	I0213 23:08:27.475138   49120 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key
	I0213 23:08:27.475241   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:27.475271   49120 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:27.475281   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:27.475305   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:27.475326   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:27.475360   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:27.475401   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:27.475997   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:27.500212   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:27.526078   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:27.552892   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:27.579169   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:27.603962   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:27.628862   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:27.653046   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:27.681039   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:27.708026   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
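	The scp lines above stage the cluster and client certificates under /var/lib/minikube/certs and the extra CA material under /usr/share/ca-certificates on the guest. A small spot-check sketch using standard openssl options against the same target paths:
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -dates
	  sudo openssl x509 -in /var/lib/minikube/certs/ca.crt -noout -subject
	  ls -l /usr/share/ca-certificates/16200.pem /usr/share/ca-certificates/162002.pem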
	I0213 23:08:28.658782   49715 start.go:369] acquired machines lock for "default-k8s-diff-port-083863" in 3m25.907988779s
	I0213 23:08:28.658844   49715 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:28.658851   49715 fix.go:54] fixHost starting: 
	I0213 23:08:28.659235   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:28.659276   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:28.677314   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0213 23:08:28.677718   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:28.678315   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:08:28.678355   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:28.678727   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:28.678935   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:28.679109   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:08:28.680868   49715 fix.go:102] recreateIfNeeded on default-k8s-diff-port-083863: state=Stopped err=<nil>
	I0213 23:08:28.680915   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	W0213 23:08:28.681100   49715 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:28.683083   49715 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-083863" ...
	I0213 23:08:27.278610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279033   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has current primary IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279068   49443 main.go:141] libmachine: (embed-certs-340656) Found IP for machine: 192.168.61.56
	I0213 23:08:27.279085   49443 main.go:141] libmachine: (embed-certs-340656) Reserving static IP address...
	I0213 23:08:27.279524   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.279553   49443 main.go:141] libmachine: (embed-certs-340656) Reserved static IP address: 192.168.61.56
	I0213 23:08:27.279572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | skip adding static IP to network mk-embed-certs-340656 - found existing host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"}
	I0213 23:08:27.279592   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Getting to WaitForSSH function...
	I0213 23:08:27.279609   49443 main.go:141] libmachine: (embed-certs-340656) Waiting for SSH to be available...
	I0213 23:08:27.282041   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282383   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.282417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282517   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH client type: external
	I0213 23:08:27.282548   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa (-rw-------)
	I0213 23:08:27.282582   49443 main.go:141] libmachine: (embed-certs-340656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:27.282598   49443 main.go:141] libmachine: (embed-certs-340656) DBG | About to run SSH command:
	I0213 23:08:27.282688   49443 main.go:141] libmachine: (embed-certs-340656) DBG | exit 0
	I0213 23:08:27.374230   49443 main.go:141] libmachine: (embed-certs-340656) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:27.374589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetConfigRaw
	I0213 23:08:27.375330   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.378273   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378648   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.378682   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378917   49443 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:08:27.379092   49443 machine.go:88] provisioning docker machine ...
	I0213 23:08:27.379109   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:27.379298   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379491   49443 buildroot.go:166] provisioning hostname "embed-certs-340656"
	I0213 23:08:27.379521   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379667   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.382028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382351   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.382404   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382562   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.382728   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.382880   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.383023   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.383213   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.383662   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.383682   49443 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname
	I0213 23:08:27.526044   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-340656
	
	I0213 23:08:27.526075   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.529185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529526   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.529556   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529660   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.529852   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530056   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530203   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.530356   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.530695   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.530725   49443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-340656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-340656/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-340656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:27.664926   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
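	The two SSH commands above set the guest hostname and make sure /etc/hosts carries a matching 127.0.1.1 entry. An illustrative check on the embed-certs-340656 guest:
	  hostname                        # expect: embed-certs-340656
	  grep '^127.0.1.1' /etc/hosts    # expect: 127.0.1.1 embed-certs-340656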
	I0213 23:08:27.664966   49443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:27.664993   49443 buildroot.go:174] setting up certificates
	I0213 23:08:27.665004   49443 provision.go:83] configureAuth start
	I0213 23:08:27.665019   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.665429   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.668520   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.668912   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.668937   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.669172   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.671996   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672365   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.672411   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672620   49443 provision.go:138] copyHostCerts
	I0213 23:08:27.672684   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:27.672706   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:27.672778   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:27.672914   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:27.672929   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:27.672966   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:27.673049   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:27.673060   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:27.673089   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:27.673187   49443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.embed-certs-340656 san=[192.168.61.56 192.168.61.56 localhost 127.0.0.1 minikube embed-certs-340656]
	I0213 23:08:27.924954   49443 provision.go:172] copyRemoteCerts
	I0213 23:08:27.925011   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:27.925033   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.928037   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928376   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.928410   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928588   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.928779   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.928960   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.929085   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.019335   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:28.043949   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 23:08:28.066824   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:08:28.089010   49443 provision.go:86] duration metric: configureAuth took 423.986916ms
	I0213 23:08:28.089043   49443 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:28.089251   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:28.089316   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.091655   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.091955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.091984   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.092151   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.092310   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092440   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092553   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.092694   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.092999   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.093014   49443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:28.402931   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:28.402953   49443 machine.go:91] provisioned docker machine in 1.023849221s
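	Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS (the insecure-registry flag for the service CIDR) to /etc/sysconfig/crio.minikube and restarting crio, as the tee/systemctl command above shows. A hedged verification sketch on the guest:
	  cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  sudo systemctl is-active crio      # expect: active, after the restart above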
	I0213 23:08:28.402962   49443 start.go:300] post-start starting for "embed-certs-340656" (driver="kvm2")
	I0213 23:08:28.402972   49443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:28.402986   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.403246   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:28.403266   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.405815   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.406201   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406331   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.406514   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.406703   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.406867   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.500638   49443 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:28.504820   49443 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:28.504839   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:28.504899   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:28.504967   49443 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:28.505051   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:28.514593   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:28.536607   49443 start.go:303] post-start completed in 133.632311ms
	I0213 23:08:28.536653   49443 fix.go:56] fixHost completed within 19.429451259s
	I0213 23:08:28.536673   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.539355   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539715   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.539739   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539914   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.540115   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540275   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540420   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.540581   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.540917   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.540932   49443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:28.658649   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865708.631208852
	
	I0213 23:08:28.658674   49443 fix.go:206] guest clock: 1707865708.631208852
	I0213 23:08:28.658682   49443 fix.go:219] Guest: 2024-02-13 23:08:28.631208852 +0000 UTC Remote: 2024-02-13 23:08:28.536657964 +0000 UTC m=+254.042699377 (delta=94.550888ms)
	I0213 23:08:28.658701   49443 fix.go:190] guest clock delta is within tolerance: 94.550888ms
	I0213 23:08:28.658707   49443 start.go:83] releasing machines lock for "embed-certs-340656", held for 19.551560323s
	I0213 23:08:28.658730   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.658982   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:28.662069   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662449   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.662480   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662651   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663245   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663430   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663521   49443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:28.663567   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.663688   49443 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:28.663712   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.666417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666867   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.666900   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667039   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.667185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667234   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667418   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667467   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667518   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.667589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667736   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.782794   49443 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:28.788743   49443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:28.933478   49443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:28.940543   49443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:28.940632   49443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:28.958972   49443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
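
The step above looks for bridge/podman CNI configs under /etc/cni/net.d and renames them with a .mk_disabled suffix so they cannot conflict with the CNI that minikube configures later. A rough local Go sketch of the same idea (minikube actually runs the quoted find/mv command over SSH via ssh_runner; the directory and suffix come from the log, everything else here is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableConflictingCNIConfigs renames bridge/podman CNI config files so the
    // container runtime ignores them, mirroring the `find ... -exec mv {} {}.mk_disabled`
    // step in the log above.
    func disableConflictingCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join(dir, name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			return disabled, err
    		}
    		disabled = append(disabled, src)
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableConflictingCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("disabled %d CNI config(s): %v\n", len(disabled), disabled)
    }
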
	I0213 23:08:28.958994   49443 start.go:475] detecting cgroup driver to use...
	I0213 23:08:28.959084   49443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:28.977833   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:28.996142   49443 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:28.996205   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:29.015509   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:29.029839   49443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:29.140405   49443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:29.265524   49443 docker.go:233] disabling docker service ...
	I0213 23:08:29.265597   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:29.283479   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:29.300116   49443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:29.428731   49443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:29.555072   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:29.569803   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:29.589259   49443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:29.589329   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.600653   49443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:29.600732   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.612313   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.624637   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.636279   49443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:29.648496   49443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:29.658957   49443 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:29.659020   49443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:29.673605   49443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:29.684589   49443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:29.800899   49443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:29.989345   49443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:29.989423   49443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:29.995420   49443 start.go:543] Will wait 60s for crictl version
	I0213 23:08:29.995489   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:08:30.000012   49443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:30.047026   49443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:30.047114   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.095456   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.146027   49443 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
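
For reference, the cri-o reconfiguration above is done with in-place sed edits over /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, drop any stale conmon_cgroup line and re-add conmon_cgroup = "pod". A hedged Go sketch of the same text rewrite (the real flow runs sed over SSH; the file path and values are taken from the log, the helper itself is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf mirrors the sed edits in the log: set the pause image,
    // force the cgroup manager, and pin conmon_cgroup to "pod".
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf := string(data)
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	// Drop any stale conmon_cgroup line, then re-add it right after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
    	return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
    	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.9", "cgroupfs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

After the rewrite the log restarts crio and waits for /var/run/crio/crio.sock to appear before querying crictl version.
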
	I0213 23:08:28.684576   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Start
	I0213 23:08:28.684757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring networks are active...
	I0213 23:08:28.685582   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network default is active
	I0213 23:08:28.685942   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network mk-default-k8s-diff-port-083863 is active
	I0213 23:08:28.686429   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Getting domain xml...
	I0213 23:08:28.687208   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Creating domain...
	I0213 23:08:30.003148   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting to get IP...
	I0213 23:08:30.004175   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004634   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004725   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.004599   50394 retry.go:31] will retry after 210.109414ms: waiting for machine to come up
	I0213 23:08:30.215983   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216407   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216439   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.216359   50394 retry.go:31] will retry after 367.743906ms: waiting for machine to come up
	I0213 23:08:30.586081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586629   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586663   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.586583   50394 retry.go:31] will retry after 342.736609ms: waiting for machine to come up
	I0213 23:08:30.931248   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931707   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931738   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.931656   50394 retry.go:31] will retry after 597.326691ms: waiting for machine to come up
	I0213 23:08:31.530395   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530818   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530848   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:31.530767   50394 retry.go:31] will retry after 749.518323ms: waiting for machine to come up
	I0213 23:08:32.281688   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282102   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282138   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:32.282052   50394 retry.go:31] will retry after 760.722423ms: waiting for machine to come up
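
The repeated "will retry after ...: waiting for machine to come up" lines are a poll-with-backoff loop that keeps asking libvirt for the domain's DHCP lease until an IP shows up. A minimal Go sketch of that pattern; it does not reproduce retry.go's jitter or exact delays, and the check function is a stand-in:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor polls check until it succeeds or the deadline passes, roughly
    // doubling the delay between attempts like the retries in the log above.
    func waitFor(check func() error, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, delay)
    		time.Sleep(delay)
    		delay *= 2
    	}
    }

    func main() {
    	start := time.Now()
    	// Stand-in for "does the domain have an IP yet?": succeeds after ~3s.
    	hasIP := func() error {
    		if time.Since(start) < 3*time.Second {
    			return errors.New("unable to find current IP address of domain")
    		}
    		return nil
    	}
    	if err := waitFor(hasIP, 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
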
	I0213 23:08:27.731687   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:27.755515   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:27.774677   49120 ssh_runner.go:195] Run: openssl version
	I0213 23:08:27.780042   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:27.789684   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794384   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794443   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.800052   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:27.809570   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:27.818781   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823148   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823241   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.829043   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:27.839290   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:27.849614   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854661   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854720   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.860365   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:27.870548   49120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:27.874967   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:27.880745   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:27.886409   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:27.892063   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:27.897857   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:27.903804   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
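
Each `-checkend 86400` call above simply asks whether the certificate expires within the next 24 hours, which is what decides whether minikube regenerates it on restart. The same question can be answered directly with crypto/x509; a hedged sketch, where the file names are just the ones visible in the log and the helper is illustrative rather than minikube's own code:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // before now+window, the same check `openssl x509 -checkend` performs.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		expiring, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
    			continue
    		}
    		fmt.Printf("%s expires within 24h: %v\n", p, expiring)
    	}
    }
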
	I0213 23:08:27.909720   49120 kubeadm.go:404] StartCluster: {Name:no-preload-778731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:27.909833   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:27.909924   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:27.951061   49120 cri.go:89] found id: ""
	I0213 23:08:27.951158   49120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:27.961916   49120 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:27.961941   49120 kubeadm.go:636] restartCluster start
	I0213 23:08:27.961993   49120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:27.971549   49120 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:27.972633   49120 kubeconfig.go:92] found "no-preload-778731" server: "https://192.168.83.31:8443"
	I0213 23:08:27.975092   49120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:27.983592   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:27.983650   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:27.993448   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.483988   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.484086   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.499804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.984581   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.984671   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.995887   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.484572   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.484680   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.496906   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.984503   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.984569   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.997813   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.484312   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.484391   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.501606   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.984144   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.984237   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.999418   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.483900   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.483977   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.498536   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.983688   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.983783   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.998804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:32.484556   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.484662   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:32.499238   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.147474   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:30.150438   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.150826   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:30.150857   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.151054   49443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:30.155517   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
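
The two commands above keep /etc/hosts pointing host.minikube.internal at the host gateway: grep for the exact entry, and if it is missing, rewrite the file with any stale mapping filtered out and the fresh one appended. A small Go sketch of that rewrite (IP and hostname are taken from the log; running with root privileges is assumed, just as sudo is used above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps host to ip,
    // following the grep -v / echo / cp sequence shown in the log.
    func ensureHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop any stale mapping for this hostname
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
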
	I0213 23:08:30.168463   49443 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:30.168543   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:30.210212   49443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:30.210296   49443 ssh_runner.go:195] Run: which lz4
	I0213 23:08:30.214665   49443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:30.219355   49443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:30.219383   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:32.244671   49443 crio.go:444] Took 2.030037 seconds to copy over tarball
	I0213 23:08:32.244757   49443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
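
The decision to pull the preload tarball hinges on the `crictl images --output json` output above: if the expected kube-apiserver tag is absent, the runtime is assumed to have no preloaded images, and the lz4 tarball is copied over and extracted into /var. A hedged Go sketch of that check, assuming crictl's usual JSON shape (an images array whose entries carry repoTags):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criImages matches the relevant part of `crictl images --output json`
    // (field names are assumed here, not taken from the minikube source).
    type criImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the runtime already knows the given image reference.
    func hasImage(ref string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var list criImages
    	if err := json.Unmarshal(out, &list); err != nil {
    		return false, err
    	}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == ref {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
    	if err != nil {
    		fmt.Println("crictl query failed:", err)
    		return
    	}
    	if !ok {
    		fmt.Println("assuming images are not preloaded; would copy and extract preloaded.tar.lz4")
    	}
    }
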
	I0213 23:08:33.043974   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044478   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:33.044417   50394 retry.go:31] will retry after 1.030870704s: waiting for machine to come up
	I0213 23:08:34.077209   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077662   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077692   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:34.077625   50394 retry.go:31] will retry after 1.450536952s: waiting for machine to come up
	I0213 23:08:35.529659   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530101   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530135   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:35.530042   50394 retry.go:31] will retry after 1.82898716s: waiting for machine to come up
	I0213 23:08:37.360889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361314   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361343   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:37.361270   50394 retry.go:31] will retry after 1.83473409s: waiting for machine to come up
	I0213 23:08:32.984096   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.984203   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.001189   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.483705   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.499694   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.983927   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.984057   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.999205   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.483708   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.483798   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.498840   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.984372   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.984461   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.999079   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.483661   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.497573   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.983985   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.984088   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.995899   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.484546   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.484660   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.496286   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.983902   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.984113   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.995778   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.484405   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.484518   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.495219   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.549721   49443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304931423s)
	I0213 23:08:35.549748   49443 crio.go:451] Took 3.305051 seconds to extract the tarball
	I0213 23:08:35.549778   49443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:35.590195   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:35.640735   49443 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:35.640768   49443 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:35.640850   49443 ssh_runner.go:195] Run: crio config
	I0213 23:08:35.707018   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:35.707046   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:35.707072   49443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:35.707117   49443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-340656 NodeName:embed-certs-340656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:35.707294   49443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-340656"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:35.707405   49443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-340656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:35.707483   49443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:35.717170   49443 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:35.717251   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:35.726586   49443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0213 23:08:35.744139   49443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:35.761480   49443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0213 23:08:35.779911   49443 ssh_runner.go:195] Run: grep 192.168.61.56	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:35.784152   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:35.799376   49443 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656 for IP: 192.168.61.56
	I0213 23:08:35.799417   49443 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:35.799601   49443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:35.799657   49443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:35.799766   49443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/client.key
	I0213 23:08:35.799859   49443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key.aef5f426
	I0213 23:08:35.799913   49443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key
	I0213 23:08:35.800053   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:35.800091   49443 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:35.800107   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:35.800143   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:35.800180   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:35.800215   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:35.800276   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:35.801130   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:35.829983   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:35.856832   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:35.883713   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:35.910759   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:35.937208   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:35.963904   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:35.991562   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:36.022900   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:36.049084   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:36.074152   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:36.098863   49443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:36.115588   49443 ssh_runner.go:195] Run: openssl version
	I0213 23:08:36.120864   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:36.130552   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.134999   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.135068   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.140621   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:36.150963   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:36.160917   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165428   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165472   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.171493   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:36.181635   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:36.191753   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196368   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196444   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.201985   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
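
The block above installs each CA into the guest's trust store the classic OpenSSL way: symlink the PEM into /etc/ssl/certs, then add a <subject-hash>.0 symlink so OpenSSL's hashed lookup can find it. A small Go sketch of that second step, shelling out to openssl for the hash exactly as the log does (paths and the helper are illustrative, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCertByHash creates certsDir/<subject-hash>.0 -> certPath, the lookup
    // format OpenSSL uses, mirroring the openssl x509 -hash / ln -fs pair above.
    func linkCertByHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return link, nil // already linked
    	}
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("trust store entry:", link)
    }
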
	I0213 23:08:36.211839   49443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:36.216608   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:36.222594   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:36.228585   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:36.234646   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:36.240579   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:36.246642   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:36.252961   49443 kubeadm.go:404] StartCluster: {Name:embed-certs-340656 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:36.253087   49443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:36.253149   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:36.297601   49443 cri.go:89] found id: ""
	I0213 23:08:36.297705   49443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:36.308068   49443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:36.308094   49443 kubeadm.go:636] restartCluster start
	I0213 23:08:36.308152   49443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:36.318071   49443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.319274   49443 kubeconfig.go:92] found "embed-certs-340656" server: "https://192.168.61.56:8443"
	I0213 23:08:36.321573   49443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:36.331006   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.331059   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.342313   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.831994   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.832106   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.845071   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.331654   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.331724   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.344311   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.831903   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.831999   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.843671   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.331225   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.331337   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.349021   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.831196   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.831292   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.847050   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.332089   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.332162   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.348108   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.198188   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198570   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198596   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:39.198528   50394 retry.go:31] will retry after 2.722095348s: waiting for machine to come up
	I0213 23:08:41.923545   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923954   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923985   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:41.923904   50394 retry.go:31] will retry after 2.239772531s: waiting for machine to come up
	I0213 23:08:37.984640   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.984743   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.999300   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.999332   49120 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:37.999340   49120 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:37.999349   49120 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:37.999402   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:38.046199   49120 cri.go:89] found id: ""
	I0213 23:08:38.046287   49120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:38.061697   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:38.071295   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:38.071378   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080401   49120 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:38.209853   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.403696   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193792627s)
	I0213 23:08:39.403733   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.602387   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.703317   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
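
Instead of a full `kubeadm init`, the restart path above replays the individual init phases against the generated kubeadm.yaml, rebuilding certs, kubeconfigs, the kubelet bootstrap, static control-plane manifests and local etcd without re-bootstrapping the node. A rough Go sketch of that sequence (binary and config paths are the ones in the log; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
    	config := "/var/tmp/minikube/kubeadm.yaml"

    	// The same phase order as the log: certs, kubeconfigs, kubelet, control plane, etcd.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", config)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running: kubeadm", strings.Join(args, " "))
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
    			os.Exit(1)
    		}
    	}
    }
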
	I0213 23:08:39.783257   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:39.783347   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.284357   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.784437   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.284302   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.783582   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.284435   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.312653   49120 api_server.go:72] duration metric: took 2.529396171s to wait for apiserver process to appear ...
	I0213 23:08:42.312698   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:42.312719   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:42.313220   49120 api_server.go:269] stopped: https://192.168.83.31:8443/healthz: Get "https://192.168.83.31:8443/healthz": dial tcp 192.168.83.31:8443: connect: connection refused
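
Once the apiserver process exists, readiness is decided by polling https://<node-ip>:8443/healthz until it answers 200, tolerating the connection-refused errors seen above while the process is still coming up. A minimal Go sketch of that poll; the real client authenticates against the cluster CA, whereas this sketch skips TLS verification purely to stay self-contained:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the timeout elapses, mirroring the health checks in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Illustrative only: the real check trusts the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			err = fmt.Errorf("healthz returned %s", resp.Status)
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("apiserver not healthy: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForHealthz("https://192.168.83.31:8443/healthz", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver healthz ok")
    }
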
	I0213 23:08:39.832020   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.832156   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.848229   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.331855   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.331992   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.347635   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.831070   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.831185   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.847184   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.331346   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.331444   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.346518   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.831081   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.831160   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.846752   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.331298   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.331389   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.348782   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.831278   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.831373   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.846241   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.331807   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.331876   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.346998   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.831697   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.831792   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.843733   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.331647   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.331762   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.343476   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.165021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165387   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:44.165357   50394 retry.go:31] will retry after 2.886798605s: waiting for machine to come up
	I0213 23:08:47.055186   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055880   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Found IP for machine: 192.168.39.3
	I0213 23:08:47.055923   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserving static IP address...
	I0213 23:08:47.056480   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.056512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserved static IP address: 192.168.39.3
	I0213 23:08:47.056537   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | skip adding static IP to network mk-default-k8s-diff-port-083863 - found existing host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"}
	I0213 23:08:47.056552   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Getting to WaitForSSH function...
	I0213 23:08:47.056567   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for SSH to be available...
	I0213 23:08:47.059414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059844   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.059882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059991   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH client type: external
	I0213 23:08:47.060025   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa (-rw-------)
	I0213 23:08:47.060061   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:47.060077   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | About to run SSH command:
	I0213 23:08:47.060093   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | exit 0
	I0213 23:08:47.154417   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:47.154807   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetConfigRaw
	I0213 23:08:47.155614   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.158506   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.158979   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.159005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.159297   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:08:47.159557   49715 machine.go:88] provisioning docker machine ...
	I0213 23:08:47.159577   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:47.159833   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160012   49715 buildroot.go:166] provisioning hostname "default-k8s-diff-port-083863"
	I0213 23:08:47.160038   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160240   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.163021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163444   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.163476   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163705   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.163908   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164070   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164234   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.164391   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.164762   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.164777   49715 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-083863 && echo "default-k8s-diff-port-083863" | sudo tee /etc/hostname
	I0213 23:08:47.304583   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-083863
	
	I0213 23:08:47.304617   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.307729   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308160   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.308196   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308345   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.308541   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308713   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308921   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.309148   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.309520   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.309539   49715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-083863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-083863/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-083863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:47.442924   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:47.442958   49715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:47.442989   49715 buildroot.go:174] setting up certificates
	I0213 23:08:47.443006   49715 provision.go:83] configureAuth start
	I0213 23:08:47.443024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.443287   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.446220   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446611   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.446646   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446821   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.449591   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.449920   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.449989   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.450162   49715 provision.go:138] copyHostCerts
	I0213 23:08:47.450221   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:47.450241   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:47.450305   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:47.450482   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:47.450497   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:47.450532   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:47.450614   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:47.450625   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:47.450651   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:47.450720   49715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-083863 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube default-k8s-diff-port-083863]
	I0213 23:08:47.522550   49715 provision.go:172] copyRemoteCerts
	I0213 23:08:47.522618   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:47.522647   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.525731   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526189   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.526230   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526410   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.526610   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.526814   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.526971   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:47.626666   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:42.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.095528   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:46.095564   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:46.095581   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.178470   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.178500   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.313729   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.318658   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.318686   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.813274   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.819766   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.819808   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.313432   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.325228   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:47.325274   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.819686   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:08:47.829842   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:08:47.829896   49120 api_server.go:131] duration metric: took 5.517189469s to wait for apiserver health ...
	I0213 23:08:47.829907   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:47.829915   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:47.831685   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:48.354933   49036 start.go:369] acquired machines lock for "old-k8s-version-245122" in 54.536117689s
	I0213 23:08:48.354988   49036 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:48.354996   49036 fix.go:54] fixHost starting: 
	I0213 23:08:48.355410   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:48.355447   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:48.375953   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0213 23:08:48.376414   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:48.376997   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:08:48.377034   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:48.377373   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:48.377578   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:08:48.377709   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:08:48.379630   49036 fix.go:102] recreateIfNeeded on old-k8s-version-245122: state=Stopped err=<nil>
	I0213 23:08:48.379660   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	W0213 23:08:48.379822   49036 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:48.381473   49036 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-245122" ...
	I0213 23:08:44.831390   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.831503   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.845068   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.331710   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.331800   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.343755   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.831306   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.831415   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.844972   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.331510   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:46.331596   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:46.343475   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.343509   49443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:46.343520   49443 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:46.343532   49443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:46.343595   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:46.388343   49443 cri.go:89] found id: ""
	I0213 23:08:46.388417   49443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:46.403792   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:46.413139   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:46.413197   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422541   49443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422566   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:46.551204   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.427625   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.656205   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.776652   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.860844   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:47.860942   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.362058   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.861851   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:49.361973   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:47.655867   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 23:08:47.687226   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:47.719579   49715 provision.go:86] duration metric: configureAuth took 276.554247ms
	I0213 23:08:47.719610   49715 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:47.719857   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:47.719945   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.723023   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723353   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.723386   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723686   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.723889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724074   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724299   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.724469   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.724860   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.724878   49715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:48.093490   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:48.093519   49715 machine.go:91] provisioned docker machine in 933.948787ms
	I0213 23:08:48.093529   49715 start.go:300] post-start starting for "default-k8s-diff-port-083863" (driver="kvm2")
	I0213 23:08:48.093540   49715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:48.093553   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.093887   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:48.093922   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.096941   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097351   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.097385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097701   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.097936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.098145   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.098367   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.188626   49715 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:48.193282   49715 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:48.193320   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:48.193406   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:48.193500   49715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:48.193597   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:48.202782   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:48.235000   49715 start.go:303] post-start completed in 141.454861ms
	I0213 23:08:48.235032   49715 fix.go:56] fixHost completed within 19.576181803s
	I0213 23:08:48.235051   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.238450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.238992   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.239024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.239320   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.239535   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239683   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239846   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.240085   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:48.240390   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:48.240401   49715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:48.354769   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865728.300012904
	
	I0213 23:08:48.354799   49715 fix.go:206] guest clock: 1707865728.300012904
	I0213 23:08:48.354811   49715 fix.go:219] Guest: 2024-02-13 23:08:48.300012904 +0000 UTC Remote: 2024-02-13 23:08:48.235035663 +0000 UTC m=+225.644270499 (delta=64.977241ms)
	I0213 23:08:48.354837   49715 fix.go:190] guest clock delta is within tolerance: 64.977241ms
	I0213 23:08:48.354845   49715 start.go:83] releasing machines lock for "default-k8s-diff-port-083863", held for 19.696026805s
	I0213 23:08:48.354884   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.355246   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:48.358586   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359040   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.359081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359323   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.359961   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360127   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360200   49715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:48.360233   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.360372   49715 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:48.360398   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.363529   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.363715   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364166   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364357   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364394   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364461   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364656   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.364824   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370192   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.370221   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.370404   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370677   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.457230   49715 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:48.484954   49715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:48.636752   49715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:48.644369   49715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:48.644452   49715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:48.667562   49715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:48.667594   49715 start.go:475] detecting cgroup driver to use...
	I0213 23:08:48.667684   49715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:48.689737   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:48.708806   49715 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:48.708876   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:48.728530   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:48.746819   49715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:48.877519   49715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:49.069574   49715 docker.go:233] disabling docker service ...
	I0213 23:08:49.069661   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:49.103853   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:49.122356   49715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:49.272225   49715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:49.412111   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:49.428799   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:49.449679   49715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:49.449734   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.465458   49715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:49.465523   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.480399   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.494161   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.507964   49715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:49.522486   49715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:49.534468   49715 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:49.534538   49715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:49.554260   49715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
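
The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge only appears once the br_netfilter module is loaded, so the code falls back to modprobe and then switches on IPv4 forwarding. Below is a minimal, illustrative Go sketch of that check-then-load sequence (not minikube's actual implementation); the command names and the /proc path are the ones from the log, and the program assumes it runs as root.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the log above: probe the sysctl first,
    // load br_netfilter if the key is missing, then enable ip_forward.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// Expected on a fresh VM; the module is simply not loaded yet.
    		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (needs root).
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }
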
	I0213 23:08:49.566868   49715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:49.725125   49715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:49.963096   49715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:49.963172   49715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:49.970420   49715 start.go:543] Will wait 60s for crictl version
	I0213 23:08:49.970508   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:08:49.976177   49715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:50.024316   49715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:50.024407   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.080031   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.133918   49715 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:48.382835   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Start
	I0213 23:08:48.383129   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring networks are active...
	I0213 23:08:48.384069   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network default is active
	I0213 23:08:48.384458   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network mk-old-k8s-version-245122 is active
	I0213 23:08:48.385051   49036 main.go:141] libmachine: (old-k8s-version-245122) Getting domain xml...
	I0213 23:08:48.387192   49036 main.go:141] libmachine: (old-k8s-version-245122) Creating domain...
	I0213 23:08:49.933195   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting to get IP...
	I0213 23:08:49.934463   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:49.935084   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:49.935109   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:49.934961   50565 retry.go:31] will retry after 206.578168ms: waiting for machine to come up
	I0213 23:08:50.143704   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.144239   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.144263   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.144177   50565 retry.go:31] will retry after 378.113433ms: waiting for machine to come up
	I0213 23:08:50.524043   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.524670   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.524703   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.524629   50565 retry.go:31] will retry after 468.261692ms: waiting for machine to come up
	I0213 23:08:50.995002   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.995616   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.995645   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.995524   50565 retry.go:31] will retry after 437.792222ms: waiting for machine to come up
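
The retry.go lines above show libmachine polling the hypervisor's DHCP leases with a growing, jittered delay until the new domain reports an IP. The following is a rough Go sketch of that wait loop only; lookupIP is a hypothetical stand-in for the lease query, and the delay bounds are illustrative, not the values minikube uses.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP stands in for querying the hypervisor's DHCP leases; it
    // returns an error until the guest has been assigned an address.
    func lookupIP() (string, error) { return "", errNoIP }

    // waitForIP retries lookupIP with an increasing, jittered delay until
    // the deadline passes, mirroring the "will retry after ..." log lines.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay += delay / 2 // grow the base delay between attempts
    	}
    	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
    	if ip, err := waitForIP(2 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("got IP:", ip)
    	}
    }
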
	I0213 23:08:50.135427   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:50.139087   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139523   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:50.139556   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139840   49715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:50.145191   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:50.159814   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:50.159873   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:50.208873   49715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:50.208947   49715 ssh_runner.go:195] Run: which lz4
	I0213 23:08:50.214254   49715 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:50.219979   49715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:50.220013   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:47.833116   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:47.862550   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:47.895377   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:47.919843   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:47.919894   49120 system_pods.go:61] "coredns-76f75df574-hgzcn" [a384c748-9d5b-4d07-b03c-5a65b3d7a450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:47.919907   49120 system_pods.go:61] "etcd-no-preload-778731" [44169811-10f1-4d3e-8eaa-b525dd0f722f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:47.919920   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [126febb5-8d0b-4162-b320-7fd718b4a974] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:47.919929   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [a7be9641-1bd0-41f9-853a-73b522c60746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:47.919945   49120 system_pods.go:61] "kube-proxy-msxf7" [81201ce9-6f3d-457c-b582-eb8a17dbf4eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:47.919968   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [72f487c5-c42e-4e42-85c8-3b3df6bccd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:47.919984   49120 system_pods.go:61] "metrics-server-57f55c9bc5-r44rm" [ae0751b9-57fe-4d99-b41c-5c685b846e1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:47.919996   49120 system_pods.go:61] "storage-provisioner" [e1d157b3-7ce1-488c-a3ea-ab0e8da83fb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:47.920009   49120 system_pods.go:74] duration metric: took 24.606913ms to wait for pod list to return data ...
	I0213 23:08:47.920031   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:47.930765   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:47.930810   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:47.930827   49120 node_conditions.go:105] duration metric: took 10.783663ms to run NodePressure ...
	I0213 23:08:47.930848   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:48.401055   49120 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407167   49120 kubeadm.go:787] kubelet initialised
	I0213 23:08:48.407238   49120 kubeadm.go:788] duration metric: took 6.148946ms waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407260   49120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:48.414170   49120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:50.427883   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:52.431208   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:49.861114   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.361308   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.861249   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.894694   49443 api_server.go:72] duration metric: took 3.033850926s to wait for apiserver process to appear ...
	I0213 23:08:50.894724   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:50.894746   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:50.895231   49443 api_server.go:269] stopped: https://192.168.61.56:8443/healthz: Get "https://192.168.61.56:8443/healthz": dial tcp 192.168.61.56:8443: connect: connection refused
	I0213 23:08:51.394882   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
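
The api_server.go lines above poll the apiserver's /healthz endpoint roughly every 500ms, treating connection-refused, 403 (before RBAC bootstrap) and 500 (while post-start hooks run) as "not ready yet" until the server answers 200 or the wait deadline expires. A hedged Go sketch of that polling loop follows; the URL, interval and timeout are taken from or assumed from the log, and the skip-verify TLS client is an illustrative simplification.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout
    // elapses. Transport errors and non-200 codes are logged and retried,
    // matching the connection-refused / 403 / 500 lines above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver presents a self-signed cert during bring-up.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("stopped: %s: %v\n", url, err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy within %v", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.56:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
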
	I0213 23:08:51.435131   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:51.435705   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:51.435733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:51.435616   50565 retry.go:31] will retry after 631.237829ms: waiting for machine to come up
	I0213 23:08:52.069120   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.069697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.069719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.069617   50565 retry.go:31] will retry after 756.691364ms: waiting for machine to come up
	I0213 23:08:52.828166   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.828631   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.828662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.828562   50565 retry.go:31] will retry after 761.909065ms: waiting for machine to come up
	I0213 23:08:53.592196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:53.592753   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:53.592779   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:53.592685   50565 retry.go:31] will retry after 1.153412106s: waiting for machine to come up
	I0213 23:08:54.747606   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:54.748184   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:54.748221   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:54.748113   50565 retry.go:31] will retry after 1.198347182s: waiting for machine to come up
	I0213 23:08:55.947978   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:55.948524   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:55.948545   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:55.948469   50565 retry.go:31] will retry after 2.116247229s: waiting for machine to come up
	I0213 23:08:52.713946   49715 crio.go:444] Took 2.499735 seconds to copy over tarball
	I0213 23:08:52.714030   49715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:56.483125   49715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.769061262s)
	I0213 23:08:56.483156   49715 crio.go:451] Took 3.769175 seconds to extract the tarball
	I0213 23:08:56.483167   49715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:56.524290   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:56.576319   49715 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:56.576349   49715 cache_images.go:84] Images are preloaded, skipping loading
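
The two `sudo crictl images --output json` runs above bracket the preload step: if a required image such as registry.k8s.io/kube-apiserver:v1.28.4 is missing, the cached lz4 tarball is copied to the guest and extracted under /var; afterwards the same query confirms "all images are preloaded". A small, illustrative Go sketch of that image check follows; the JSON field names follow crictl's output as I understand it and should be treated as an assumption, not a guarantee.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages captures only the fields we need from
    // `crictl images --output json`.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the container runtime already has the
    // given image tag, deciding whether the preload tarball is needed.
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
    	if err != nil {
    		fmt.Println("could not query images:", err)
    		return
    	}
    	if ok {
    		fmt.Println("all images are preloaded for cri-o runtime.")
    	} else {
    		fmt.Println("assuming images are not preloaded; extracting /preloaded.tar.lz4")
    	}
    }
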
	I0213 23:08:56.576435   49715 ssh_runner.go:195] Run: crio config
	I0213 23:08:56.633481   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:08:56.633514   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:56.633537   49715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:56.633561   49715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-083863 NodeName:default-k8s-diff-port-083863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:56.633744   49715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-083863"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:56.633838   49715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-083863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 23:08:56.633930   49715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:56.643018   49715 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:56.643110   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:56.652116   49715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 23:08:56.670140   49715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:56.687456   49715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0213 23:08:56.707317   49715 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:56.711339   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
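
The grep followed by the `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` command above is an idempotent /etc/hosts update: drop any stale line for control-plane.minikube.internal, append the fresh mapping, and copy the result back into place. The same idea expressed as a minimal Go sketch, under the assumption that the process already has permission to replace the file; the name and address are the ones from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setHostsEntry rewrites hostsPath so that it contains exactly one
    // line mapping name to ip, mirroring the grep -v / echo / cp dance.
    func setHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		fields := strings.Fields(line)
    		// Drop any existing mapping for this name (the `grep -v` step).
    		if len(fields) > 0 && fields[len(fields)-1] == name {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := hostsPath + ".new"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath) // the `sudo cp` step, minus sudo
    }

    func main() {
    	if err := setHostsEntry("/etc/hosts", "192.168.39.3", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
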
	I0213 23:08:56.726090   49715 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863 for IP: 192.168.39.3
	I0213 23:08:56.726139   49715 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:56.726320   49715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:56.726381   49715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:56.726486   49715 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.key
	I0213 23:08:56.755690   49715 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key.599d509e
	I0213 23:08:56.755797   49715 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key
	I0213 23:08:56.755953   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:56.755996   49715 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:56.756008   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:56.756042   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:56.756072   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:56.756104   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:56.756157   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:56.756999   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:56.790072   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:56.821182   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:56.849753   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:56.875241   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:56.901057   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:56.929989   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:56.959488   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:56.991678   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:57.019756   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:57.047743   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:57.078812   49715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:57.097081   49715 ssh_runner.go:195] Run: openssl version
	I0213 23:08:57.103754   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:57.117364   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124069   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124160   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.132252   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:57.145398   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:57.158348   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164091   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164158   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.171693   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:57.185004   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:57.198410   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204432   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204495   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.210331   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
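
Each cert above is installed with the same three steps: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash with `openssl x509 -hash -noout`, then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL's lookup-by-hash finds it. A short Go sketch of that last pair of steps, shelling out to openssl; the paths are the ones from the log and the helper is illustrative only (it needs root to write into /etc/ssl/certs).

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links certPath into /etc/ssl/certs under the OpenSSL
    // subject-hash name (<hash>.0), the layout built above with
    // `openssl x509 -hash -noout` followed by `ln -fs`.
    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, matching `ln -fs`
    	return os.Symlink(certPath, link)
    }

    func main() {
    	for _, cert := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/16200.pem",
    		"/usr/share/ca-certificates/162002.pem",
    	} {
    		if err := installCA(cert); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
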
	I0213 23:08:57.221567   49715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:57.226357   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:57.232307   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:57.239034   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:57.245485   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:57.252782   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:57.259406   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
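
The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate is still valid for at least the next 24 hours before the existing cluster is restarted. An equivalent check in Go, parsing the PEM directly instead of shelling out; this is a sketch, with the two paths picked from the log for illustration.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the certificate at path remains valid for
    // at least dur (the log's `-checkend 86400` is dur = 24h).
    func validFor(path string, dur time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(dur).Before(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    	} {
    		ok, err := validFor(p, 24*time.Hour)
    		fmt.Println(p, ok, err)
    	}
    }
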
	I0213 23:08:57.265644   49715 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:57.265744   49715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:57.265820   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:57.313129   49715 cri.go:89] found id: ""
	I0213 23:08:57.313210   49715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:57.323716   49715 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:57.323747   49715 kubeadm.go:636] restartCluster start
	I0213 23:08:57.323837   49715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:57.333805   49715 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.335100   49715 kubeconfig.go:92] found "default-k8s-diff-port-083863" server: "https://192.168.39.3:8444"
	I0213 23:08:57.337669   49715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:57.347371   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.347434   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.359168   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:53.424206   49120 pod_ready.go:92] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:53.424235   49120 pod_ready.go:81] duration metric: took 5.01002772s waiting for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:53.424249   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:55.432858   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:54.636558   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.636595   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.636612   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.714679   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.714727   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.894910   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.909668   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:54.909716   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.395328   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.401124   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.401155   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.895827   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.901814   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.901848   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.395611   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.402367   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.402404   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.894889   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.900228   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.900267   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.394804   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.404774   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.404811   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.895090   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.902470   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.902527   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:58.395650   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:58.404727   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:08:58.413383   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:08:58.413425   49443 api_server.go:131] duration metric: took 7.518687282s to wait for apiserver health ...
	I0213 23:08:58.413437   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:58.413444   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:58.415682   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:58.417320   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:58.436763   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:58.468658   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:58.482719   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:58.482755   49443 system_pods.go:61] "coredns-5dd5756b68-h86p6" [9d274749-fe12-43c1-b30c-70586c04daf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:58.482762   49443 system_pods.go:61] "etcd-embed-certs-340656" [1fbdd834-b8c1-48c9-aab7-3c72d7012eca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:58.482770   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [3bb1cfb1-8fea-4b7a-a459-a709010ee6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:58.482783   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [f8035337-1819-4b0b-83eb-1992445c0185] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:58.482790   49443 system_pods.go:61] "kube-proxy-swxwt" [2bbc949c-f478-4c01-9e81-884a05a9a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:58.482795   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [923ef614-eef1-4e32-ae83-2e540841060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:58.482831   49443 system_pods.go:61] "metrics-server-57f55c9bc5-lmcwv" [a948cc5d-01b6-4298-a7c7-24d9704497d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:58.482846   49443 system_pods.go:61] "storage-provisioner" [9fc17bde-ff30-4ed7-829c-3d59badd55f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:58.482854   49443 system_pods.go:74] duration metric: took 14.17202ms to wait for pod list to return data ...
	I0213 23:08:58.482865   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:58.487666   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:58.487710   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:58.487723   49443 node_conditions.go:105] duration metric: took 4.851634ms to run NodePressure ...
	I0213 23:08:58.487743   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:59.044504   49443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088347   49443 kubeadm.go:787] kubelet initialised
	I0213 23:08:59.088379   49443 kubeadm.go:788] duration metric: took 43.842389ms waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088390   49443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:59.105292   49443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.067162   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:58.067629   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:58.067662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:58.067589   50565 retry.go:31] will retry after 2.740013841s: waiting for machine to come up
	I0213 23:09:00.811129   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:00.811590   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:00.811623   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:00.811537   50565 retry.go:31] will retry after 3.449503247s: waiting for machine to come up
	I0213 23:08:57.848036   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.848128   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.863924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.348357   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.348539   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.364081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.848249   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.848321   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.860671   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.348282   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.348385   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.364226   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.847737   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.847838   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.864832   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.348231   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.348311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.360532   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.848115   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.848220   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.861558   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.348101   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.348192   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.360173   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.847696   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.847788   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.859631   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:02.348255   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.348353   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.363081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.943272   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:58.432531   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:58.432613   49120 pod_ready.go:81] duration metric: took 5.008354336s waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.432631   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:00.441099   49120 pod_ready.go:102] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:01.440207   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.440235   49120 pod_ready.go:81] duration metric: took 3.0075951s waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.440249   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446456   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.446483   49120 pod_ready.go:81] duration metric: took 6.224957ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446495   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452476   49120 pod_ready.go:92] pod "kube-proxy-msxf7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.452509   49120 pod_ready.go:81] duration metric: took 6.006176ms waiting for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452520   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457619   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.457640   49120 pod_ready.go:81] duration metric: took 5.112826ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457648   49120 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.113738   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:03.114003   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.262520   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:04.262989   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:04.263018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:04.262939   50565 retry.go:31] will retry after 3.540479459s: waiting for machine to come up
	I0213 23:09:02.847964   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.848073   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.863100   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.347510   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.347608   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.362561   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.847536   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.847635   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.863357   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.347939   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.348026   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.363027   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.847491   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.847576   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.858924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.347449   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.347527   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.359307   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.847845   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.847934   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.859530   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.348136   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.348231   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.360149   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.847699   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.847786   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.859859   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.347717   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:07.347806   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:07.360175   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.360211   49715 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:07.360223   49715 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:07.360234   49715 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:07.360304   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:07.400269   49715 cri.go:89] found id: ""
	I0213 23:09:07.400360   49715 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:07.416990   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:07.426513   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:07.426588   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436165   49715 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436197   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:07.602305   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:03.467176   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:05.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.614199   49443 pod_ready.go:92] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:04.614230   49443 pod_ready.go:81] duration metric: took 5.508903545s waiting for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:04.614244   49443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:06.621198   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:08.622226   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:07.807018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:07.807577   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:07.807609   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:07.807519   50565 retry.go:31] will retry after 4.623412618s: waiting for machine to come up
	I0213 23:09:08.566096   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.757816   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.894570   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.984493   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:08.984609   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.485363   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.984792   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.485221   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.985649   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.485311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.516028   49715 api_server.go:72] duration metric: took 2.531534981s to wait for apiserver process to appear ...
	I0213 23:09:11.516054   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:11.516076   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:08.466006   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.965586   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.623965   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.623991   49443 pod_ready.go:81] duration metric: took 6.009738992s waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.624002   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631790   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.631813   49443 pod_ready.go:81] duration metric: took 7.802592ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631830   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638042   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.638065   49443 pod_ready.go:81] duration metric: took 6.226067ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638077   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645111   49443 pod_ready.go:92] pod "kube-proxy-swxwt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.645135   49443 pod_ready.go:81] duration metric: took 7.051124ms waiting for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645146   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651681   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.651703   49443 pod_ready.go:81] duration metric: took 6.550486ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651712   49443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:12.659172   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:12.435133   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435720   49036 main.go:141] libmachine: (old-k8s-version-245122) Found IP for machine: 192.168.50.36
	I0213 23:09:12.435751   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has current primary IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435762   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserving static IP address...
	I0213 23:09:12.436196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.436241   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | skip adding static IP to network mk-old-k8s-version-245122 - found existing host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"}
	I0213 23:09:12.436262   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserved static IP address: 192.168.50.36
	I0213 23:09:12.436280   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting for SSH to be available...
	I0213 23:09:12.436296   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Getting to WaitForSSH function...
	I0213 23:09:12.438534   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.438892   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.438925   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.439062   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH client type: external
	I0213 23:09:12.439099   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa (-rw-------)
	I0213 23:09:12.439149   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:09:12.439183   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | About to run SSH command:
	I0213 23:09:12.439202   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | exit 0
	I0213 23:09:12.541930   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | SSH cmd err, output: <nil>: 
	I0213 23:09:12.542357   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetConfigRaw
	I0213 23:09:12.543071   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.546226   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546714   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.546747   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546955   49036 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:09:12.547163   49036 machine.go:88] provisioning docker machine ...
	I0213 23:09:12.547200   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:12.547445   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547594   49036 buildroot.go:166] provisioning hostname "old-k8s-version-245122"
	I0213 23:09:12.547615   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547770   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.550250   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.550734   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550939   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.551160   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551322   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.551648   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.551974   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.552000   49036 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname
	I0213 23:09:12.705495   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245122
	
	I0213 23:09:12.705528   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.708503   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.708860   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.708893   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.709092   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.709277   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709657   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.709831   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.710263   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.710285   49036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245122/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:09:12.858225   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:09:12.858266   49036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:09:12.858287   49036 buildroot.go:174] setting up certificates
	I0213 23:09:12.858300   49036 provision.go:83] configureAuth start
	I0213 23:09:12.858313   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.858624   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.861374   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861727   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.861759   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.864007   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864334   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.864370   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864549   49036 provision.go:138] copyHostCerts
	I0213 23:09:12.864627   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:09:12.864643   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:09:12.864728   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:09:12.864853   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:09:12.864868   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:09:12.864904   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:09:12.865008   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:09:12.865018   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:09:12.865049   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:09:12.865130   49036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245122 san=[192.168.50.36 192.168.50.36 localhost 127.0.0.1 minikube old-k8s-version-245122]
	I0213 23:09:12.938444   49036 provision.go:172] copyRemoteCerts
	I0213 23:09:12.938508   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:09:12.938530   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.941384   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.941758   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941989   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.942202   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.942394   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.942545   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.041212   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:09:13.069849   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 23:09:13.092979   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:09:13.115949   49036 provision.go:86] duration metric: configureAuth took 257.625697ms
	I0213 23:09:13.115983   49036 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:09:13.116196   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:13.116279   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.119207   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119644   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.119684   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119901   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.120096   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120288   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120443   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.120599   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.121149   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.121179   49036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:09:13.453399   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:09:13.453431   49036 machine.go:91] provisioned docker machine in 906.25243ms
	I0213 23:09:13.453444   49036 start.go:300] post-start starting for "old-k8s-version-245122" (driver="kvm2")
	I0213 23:09:13.453459   49036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:09:13.453479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.453816   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:09:13.453849   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.457033   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457355   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.457388   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457560   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.457778   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.457991   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.458207   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.559903   49036 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:09:13.566012   49036 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:09:13.566046   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:09:13.566119   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:09:13.566215   49036 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:09:13.566336   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:09:13.578878   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:13.610396   49036 start.go:303] post-start completed in 156.935564ms
	I0213 23:09:13.610434   49036 fix.go:56] fixHost completed within 25.25543712s
	I0213 23:09:13.610459   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.613960   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614271   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.614330   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614575   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.614828   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615081   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615275   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.615494   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.615954   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.615977   49036 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:09:13.759068   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865753.693690059
	
	I0213 23:09:13.759095   49036 fix.go:206] guest clock: 1707865753.693690059
	I0213 23:09:13.759106   49036 fix.go:219] Guest: 2024-02-13 23:09:13.693690059 +0000 UTC Remote: 2024-02-13 23:09:13.610438113 +0000 UTC m=+362.380845041 (delta=83.251946ms)
	I0213 23:09:13.759130   49036 fix.go:190] guest clock delta is within tolerance: 83.251946ms
	I0213 23:09:13.759136   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 25.404173426s
	I0213 23:09:13.759161   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.759480   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:13.762537   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.762928   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.762967   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.763172   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763718   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763907   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763998   49036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:09:13.764050   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.764122   49036 ssh_runner.go:195] Run: cat /version.json
	I0213 23:09:13.764149   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.767081   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767387   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767526   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767558   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767736   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.767812   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767834   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.768002   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.768190   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768220   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768343   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768370   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.768490   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
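
The two ssh clients above are opened against the DHCP-assigned IP on port 22 with the per-machine id_rsa key, and every later "Run:" line is a command executed over that connection. A minimal sketch of the same pattern using golang.org/x/crypto/ssh; the host address, user, key path, and command below are illustrative placeholders, not values verified against minikube's sshutil implementation:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; a real caller would pass the machine's id_rsa.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.50.36:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One session per command, mirroring the Run: lines in the log.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("systemctl --version")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
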
	I0213 23:09:13.886145   49036 ssh_runner.go:195] Run: systemctl --version
	I0213 23:09:13.892222   49036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:09:14.044107   49036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:09:14.051031   49036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:09:14.051134   49036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:09:14.071908   49036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:09:14.071942   49036 start.go:475] detecting cgroup driver to use...
	I0213 23:09:14.072026   49036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:09:14.091007   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:09:14.105419   49036 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:09:14.105501   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:09:14.120760   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:09:14.135296   49036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:09:14.267338   49036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:09:14.403936   49036 docker.go:233] disabling docker service ...
	I0213 23:09:14.404023   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:09:14.419791   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:09:14.434449   49036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:09:14.569365   49036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:09:14.700619   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:09:14.718646   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:09:14.738870   49036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0213 23:09:14.738944   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.750436   49036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:09:14.750529   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.762397   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.773950   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.786798   49036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:09:14.801457   49036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:09:14.813254   49036 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:09:14.813331   49036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:09:14.830374   49036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:09:14.840984   49036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:09:14.994777   49036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:09:15.193564   49036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:09:15.193657   49036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
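
start.go gives the runtime 60 seconds for /var/run/crio/crio.sock to appear, checking with stat before moving on to crictl. A generic sketch of such a wait loop, assuming a simple poll-until-exists strategy (path, interval, and timeout are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the file exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}
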
	I0213 23:09:15.200616   49036 start.go:543] Will wait 60s for crictl version
	I0213 23:09:15.200749   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:15.205888   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:09:15.249751   49036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:09:15.249884   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.302320   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.361046   49036 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0213 23:09:15.362396   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:15.365548   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366008   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:15.366041   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366287   49036 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:09:15.370727   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:15.384064   49036 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:09:15.384171   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:15.432027   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:15.432110   49036 ssh_runner.go:195] Run: which lz4
	I0213 23:09:15.436393   49036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:09:15.440914   49036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:09:15.440956   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0213 23:09:15.218410   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:15.218442   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:15.218457   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.346077   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.346112   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:15.516188   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.523339   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.523371   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.016747   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.024910   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.024944   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.516538   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.528640   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.528673   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:17.016269   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:17.022413   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:09:17.033775   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:09:17.033807   49715 api_server.go:131] duration metric: took 5.51774459s to wait for apiserver health ...
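
The 49715 process polls the apiserver's /healthz endpoint about twice a second, treating the 403 (anonymous request) and 500 (post-start hooks still running) responses as "not ready yet" and stopping once it sees 200. A hedged sketch of that style of loop; the endpoint, timeout, and use of InsecureSkipVerify are assumptions for a local test cluster, not minikube's actual client setup:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Unauthenticated client with TLS verification disabled, as one might use
	// against a freshly started local test apiserver.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.3:8444/healthz" // placeholder endpoint
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 while hooks finish: keep retrying.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
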
	I0213 23:09:17.033819   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:09:17.033828   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:17.035635   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:17.037195   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:17.064472   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:17.115519   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:17.133771   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:09:17.133887   49715 system_pods.go:61] "coredns-5dd5756b68-cvtjg" [507ded52-9061-4ab7-8298-31847da5dad3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:09:17.133914   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [2ef46644-d4d0-4e8c-b2aa-4e154780be70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:09:17.133952   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [c1f51407-cfd9-4329-9153-2dacb87952c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:09:17.133975   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [1ad24825-8c75-4220-a316-2dd4826da8fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:09:17.133995   49715 system_pods.go:61] "kube-proxy-zzskr" [fb71ceb1-9f9a-4c8b-ae1e-1eeb91706110] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:09:17.134015   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [4500697c-7313-4217-9843-14edb2c7fdb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:09:17.134042   49715 system_pods.go:61] "metrics-server-57f55c9bc5-p97jh" [dc549bc9-87e4-4cb6-99b5-e937f2916d6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:09:17.134063   49715 system_pods.go:61] "storage-provisioner" [c5ad957d-09f9-46e7-b0e7-e7c0b13f671f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:09:17.134081   49715 system_pods.go:74] duration metric: took 18.533785ms to wait for pod list to return data ...
	I0213 23:09:17.134103   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:17.145025   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:17.145131   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:17.145159   49715 node_conditions.go:105] duration metric: took 11.041762ms to run NodePressure ...
	I0213 23:09:17.145201   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
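
The addon phase above is run with PATH pointing at the version-pinned kubeadm under /var/lib/minikube/binaries/v1.28.4. A small os/exec sketch of invoking a command with an overridden PATH; the binary path and config path are taken from the log line above, the rest (including dropping sudo) is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Use an absolute path: exec.Command resolves bare names with the parent's
	// PATH, so the override in cmd.Env below only affects the child process.
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
	cmd := exec.Command(kubeadm, "init", "phase", "addon", "all",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Env = append(os.Environ(),
		"PATH=/var/lib/minikube/binaries/v1.28.4:"+os.Getenv("PATH"))
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm failed:", err)
		os.Exit(1)
	}
}
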
	I0213 23:09:13.466367   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:15.966324   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:14.661158   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:16.663448   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:19.164418   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.224597   49036 crio.go:444] Took 1.788234 seconds to copy over tarball
	I0213 23:09:17.224685   49036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:09:20.618866   49036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.394137292s)
	I0213 23:09:20.618905   49036 crio.go:451] Took 3.394273 seconds to extract the tarball
	I0213 23:09:20.618918   49036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:09:20.665417   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:20.718004   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:20.718036   49036 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.718175   49036 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.718201   49036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.718126   49036 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.718148   49036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.718154   49036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.718181   49036 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719739   49036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719784   49036 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.719745   49036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.719855   49036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.719951   49036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.720062   49036 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 23:09:20.720172   49036 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.720184   49036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.877532   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.894803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.906336   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.909341   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.910608   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 23:09:20.933612   49036 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 23:09:20.933664   49036 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.933724   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:20.947803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.979922   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.026909   49036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 23:09:21.026953   49036 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.026986   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.034243   49036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 23:09:21.034279   49036 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.034321   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.053547   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:21.068143   49036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 23:09:21.068194   49036 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 23:09:21.068228   49036 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0213 23:09:21.068195   49036 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0213 23:09:21.068318   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.110630   49036 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 23:09:21.110695   49036 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.110747   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.120732   49036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 23:09:21.120777   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.120781   49036 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.120851   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.120887   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.272660   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0213 23:09:21.272723   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 23:09:21.272771   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.272813   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.272858   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 23:09:21.272914   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.272966   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 23:09:17.706218   49715 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713293   49715 kubeadm.go:787] kubelet initialised
	I0213 23:09:17.713322   49715 kubeadm.go:788] duration metric: took 7.076014ms waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713332   49715 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:17.724146   49715 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:19.733686   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.412892   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.970757   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:20.466081   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.467149   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.660264   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:23.660813   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.375314   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 23:09:21.376306   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 23:09:21.376453   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 23:09:21.376491   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 23:09:21.585135   49036 cache_images.go:92] LoadImages completed in 867.071904ms
	W0213 23:09:21.585230   49036 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0213 23:09:21.585316   49036 ssh_runner.go:195] Run: crio config
	I0213 23:09:21.650741   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:21.650767   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:21.650789   49036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:09:21.650812   49036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245122 NodeName:old-k8s-version-245122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:09:21.650991   49036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-245122"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-245122
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.36:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:09:21.651106   49036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-245122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:09:21.651173   49036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 23:09:21.662478   49036 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:09:21.662558   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:09:21.672654   49036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0213 23:09:21.690609   49036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:09:21.708199   49036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0213 23:09:21.728361   49036 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0213 23:09:21.732450   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:21.747349   49036 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122 for IP: 192.168.50.36
	I0213 23:09:21.747391   49036 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:21.747532   49036 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:09:21.747582   49036 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:09:21.747644   49036 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.key
	I0213 23:09:21.958574   49036 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key.e3c4a843
	I0213 23:09:21.958790   49036 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key
	I0213 23:09:21.958978   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:09:21.959024   49036 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:09:21.959040   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:09:21.959090   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:09:21.959135   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:09:21.959168   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:09:21.959234   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:21.960121   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:09:21.986921   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:09:22.011993   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:09:22.038194   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:09:22.064839   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:09:22.089629   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:09:22.116404   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:09:22.141615   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:09:22.167298   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:09:22.194577   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:09:22.220140   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:09:22.245124   49036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:09:22.265798   49036 ssh_runner.go:195] Run: openssl version
	I0213 23:09:22.273510   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:09:22.287657   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294180   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294261   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.300826   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:09:22.313535   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:09:22.324047   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329069   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329171   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.335862   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:09:22.347417   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:09:22.358082   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363477   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363536   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.369915   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:09:22.380910   49036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:09:22.385812   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:09:22.392981   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:09:22.400722   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:09:22.409089   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:09:22.417036   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:09:22.423381   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
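
Each openssl x509 -checkend 86400 call above asks whether the certificate stays valid for at least the next 24 hours. The equivalent check in Go's crypto/x509, as a sketch; the certificate path in main is one of the files probed above, and the helper name is made up for illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for more than 24h")
	}
}
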
	I0213 23:09:22.430098   49036 kubeadm.go:404] StartCluster: {Name:old-k8s-version-245122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:09:22.430177   49036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:09:22.430246   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:22.490283   49036 cri.go:89] found id: ""
	I0213 23:09:22.490371   49036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:09:22.500902   49036 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:09:22.500931   49036 kubeadm.go:636] restartCluster start
	I0213 23:09:22.501004   49036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:09:22.511985   49036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:22.513298   49036 kubeconfig.go:92] found "old-k8s-version-245122" server: "https://192.168.50.36:8443"
	I0213 23:09:22.516673   49036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:09:22.526466   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:22.526561   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:22.539541   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.027052   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.027161   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.039390   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.527142   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.527234   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.539846   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.027048   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.027144   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.038367   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.526911   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.527012   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.538906   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.027095   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.027195   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.038232   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.526805   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.526911   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.540281   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:26.026811   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.026908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.039699   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.238007   49715 pod_ready.go:92] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:23.238035   49715 pod_ready.go:81] duration metric: took 5.513854942s waiting for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:23.238051   49715 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.744985   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:24.745007   49715 pod_ready.go:81] duration metric: took 1.506948533s waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.745015   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:26.751610   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:24.965048   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:27.465069   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.159564   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:28.660224   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.527051   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.527135   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.539382   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.026915   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.026990   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.038660   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.527300   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.527391   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.539714   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.027042   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.027124   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.039419   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.527549   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.527649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.540659   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.027032   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.027134   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.038415   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.526595   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.526690   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.538928   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.027041   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.027119   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.040125   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.526693   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.526765   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.540060   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:31.026988   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.027096   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.039327   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.755419   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.254128   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.254154   49715 pod_ready.go:81] duration metric: took 6.509132102s waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.254164   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262007   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.262032   49715 pod_ready.go:81] duration metric: took 7.859557ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262042   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267937   49715 pod_ready.go:92] pod "kube-proxy-zzskr" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.267959   49715 pod_ready.go:81] duration metric: took 5.911683ms waiting for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267967   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273442   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.273462   49715 pod_ready.go:81] duration metric: took 5.488135ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273471   49715 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:29.466908   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.965093   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.159176   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.159463   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.526738   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.526879   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.539174   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.026678   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.026780   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.039078   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.527030   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.527120   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.539058   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.539094   49036 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:32.539105   49036 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:32.539116   49036 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:32.539188   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:32.583832   49036 cri.go:89] found id: ""
	I0213 23:09:32.583931   49036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:32.600343   49036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:32.609666   49036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:32.609744   49036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619068   49036 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619093   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:32.751642   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:33.784796   49036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03311496s)
	I0213 23:09:33.784825   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.013311   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.172539   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.290655   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:34.290759   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:34.791649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.290908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.791035   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:33.283651   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.798120   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.966930   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.465311   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.160502   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:37.163077   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.291009   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.791117   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.809796   49036 api_server.go:72] duration metric: took 2.519141205s to wait for apiserver process to appear ...
	I0213 23:09:36.809851   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:36.809880   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:38.282180   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.282368   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:38.466126   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.967293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.811101   49036 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 23:09:41.811184   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.485465   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.485495   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.485516   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.539632   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.539667   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.809967   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.823007   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:42.823043   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.310359   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.318326   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:43.318384   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.809942   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.816666   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:09:43.824593   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:09:43.824622   49036 api_server.go:131] duration metric: took 7.014763564s to wait for apiserver health ...
	I0213 23:09:43.824639   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:43.824647   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:43.826660   49036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:39.659667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.660321   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.664984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.827993   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:43.837268   49036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:43.855659   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:43.864719   49036 system_pods.go:59] 7 kube-system pods found
	I0213 23:09:43.864756   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:09:43.864764   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:09:43.864770   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:09:43.864778   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Pending
	I0213 23:09:43.864783   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:09:43.864789   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:09:43.864795   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:09:43.864803   49036 system_pods.go:74] duration metric: took 9.113954ms to wait for pod list to return data ...
	I0213 23:09:43.864812   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:43.872183   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:43.872222   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:43.872237   49036 node_conditions.go:105] duration metric: took 7.415138ms to run NodePressure ...
	I0213 23:09:43.872269   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:44.129786   49036 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134864   49036 kubeadm.go:787] kubelet initialised
	I0213 23:09:44.134891   49036 kubeadm.go:788] duration metric: took 5.071047ms waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134901   49036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:44.139027   49036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.143942   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143967   49036 pod_ready.go:81] duration metric: took 4.910454ms waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.143978   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143986   49036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.147838   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147923   49036 pod_ready.go:81] duration metric: took 3.927311ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.147935   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147944   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.152465   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152490   49036 pod_ready.go:81] duration metric: took 4.536109ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.152500   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152508   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.259273   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259309   49036 pod_ready.go:81] duration metric: took 106.789068ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.259325   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259334   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.659385   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659423   49036 pod_ready.go:81] duration metric: took 400.079528ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.659436   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659443   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:45.065474   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065510   49036 pod_ready.go:81] duration metric: took 406.055078ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:45.065524   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065533   49036 pod_ready.go:38] duration metric: took 930.621868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:45.065555   49036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:09:45.100009   49036 ops.go:34] apiserver oom_adj: -16
	I0213 23:09:45.100037   49036 kubeadm.go:640] restartCluster took 22.599099367s
	I0213 23:09:45.100049   49036 kubeadm.go:406] StartCluster complete in 22.6699561s
	I0213 23:09:45.100070   49036 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.100156   49036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:09:45.103031   49036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.103315   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:09:45.103447   49036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:09:45.103540   49036 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103562   49036 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-245122"
	I0213 23:09:45.103571   49036 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103593   49036 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:45.103603   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:45.103638   49036 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103693   49036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245122"
	W0213 23:09:45.103608   49036 addons.go:243] addon metrics-server should already be in state true
	W0213 23:09:45.103577   49036 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:09:45.103879   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104144   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104215   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104227   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.104318   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.103829   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104877   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104904   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.123332   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0213 23:09:45.123486   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0213 23:09:45.123555   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0213 23:09:45.123964   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124143   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124148   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124449   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124469   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124650   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124674   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124654   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124743   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124965   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125030   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125083   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.125564   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125567   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125598   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.125612   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.129046   49036 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-245122"
	W0213 23:09:45.129065   49036 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:09:45.129085   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.129385   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.129415   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.145900   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0213 23:09:45.146570   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.147144   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.147164   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.147448   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.147635   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.156023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.158533   49036 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:09:45.159815   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:09:45.159837   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:09:45.159862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.163799   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164445   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.164472   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164859   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.165112   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.165340   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.165523   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.166097   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0213 23:09:45.166513   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.167086   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.167111   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.167442   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.167623   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.168284   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0213 23:09:45.168855   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.169453   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.169471   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.169702   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.169992   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.171532   49036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:45.170687   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.172965   49036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.172979   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.172983   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:09:45.173009   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.176733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177198   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.177232   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177269   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.177506   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.177675   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.177885   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.190339   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0213 23:09:45.190750   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.191239   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.191267   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.191609   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.191803   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.193470   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.193730   49036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.193748   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:09:45.193769   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.196896   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197422   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.197459   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197745   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.197935   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.198191   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.198301   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.392787   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:09:45.392808   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:09:45.426298   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.440984   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.452209   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:09:45.452239   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:09:45.531203   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:45.531226   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:09:45.593779   49036 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 23:09:45.621016   49036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245122" context rescaled to 1 replicas
	I0213 23:09:45.621056   49036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:09:45.623081   49036 out.go:177] * Verifying Kubernetes components...
	I0213 23:09:45.624623   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:09:45.631546   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:46.116692   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116732   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.116735   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116736   49036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:46.116754   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117125   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117172   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117183   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117192   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117201   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117203   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117218   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117228   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117247   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117667   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117671   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117708   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117728   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117962   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117980   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140111   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.140133   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.140411   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.140441   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140431   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.228877   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.228908   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229250   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229273   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229273   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.229283   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.229293   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229523   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229538   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229558   49036 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:46.231176   49036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:09:46.232329   49036 addons.go:505] enable addons completed in 1.128872958s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:09:42.783163   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:44.783634   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.281934   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.465665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:45.964909   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:46.160084   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.664267   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.120153   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:50.120636   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:49.781808   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.281392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.968701   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:50.465488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:51.161059   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:53.662099   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.121578   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:53.120859   49036 node_ready.go:49] node "old-k8s-version-245122" has status "Ready":"True"
	I0213 23:09:53.120885   49036 node_ready.go:38] duration metric: took 7.004121529s waiting for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:53.120896   49036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:53.129174   49036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:55.136200   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.283011   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.286197   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.964530   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.964679   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.966183   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.159475   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.160233   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:57.636373   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.137616   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.782611   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:59.465313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.465877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.660202   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.159244   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:02.635052   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:04.636231   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.284083   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.781701   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.966234   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.465225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.160136   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.160817   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.161703   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.636789   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.135398   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.135441   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.782000   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.782948   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.785161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:08.465688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:10.967225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.658937   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.661460   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.138346   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.636437   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:14.282538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.781339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.465521   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.965224   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.162065   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:18.658525   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.648838   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.137226   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:19.282514   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:21.781917   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.966716   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.464644   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.465071   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.659514   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.662481   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.636371   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.136197   49036 pod_ready.go:92] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.136234   49036 pod_ready.go:81] duration metric: took 31.007029263s waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.136249   49036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142089   49036 pod_ready.go:92] pod "etcd-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.142114   49036 pod_ready.go:81] duration metric: took 5.854061ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142127   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149372   49036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.149396   49036 pod_ready.go:81] duration metric: took 7.261015ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149409   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158342   49036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.158371   49036 pod_ready.go:81] duration metric: took 8.953577ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158384   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165154   49036 pod_ready.go:92] pod "kube-proxy-nj7qx" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.165177   49036 pod_ready.go:81] duration metric: took 6.785683ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165186   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533838   49036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.533863   49036 pod_ready.go:81] duration metric: took 368.670292ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533896   49036 pod_ready.go:38] duration metric: took 31.412988042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:10:24.533912   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:10:24.534007   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:10:24.549186   49036 api_server.go:72] duration metric: took 38.928101792s to wait for apiserver process to appear ...
	I0213 23:10:24.549217   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:10:24.549238   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:10:24.557366   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:10:24.558364   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:10:24.558387   49036 api_server.go:131] duration metric: took 9.165129ms to wait for apiserver health ...
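The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the VM is still reachable at the address shown in the log and skipping TLS verification for brevity:

    # expect the literal response "ok", matching the 200 logged above
    curl -sk https://192.168.50.36:8443/healthz
    # or via the kubeconfig minikube writes (context name assumed to match the profile)
    kubectl --context old-k8s-version-245122 get --raw /healthz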
	I0213 23:10:24.558396   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:10:24.736365   49036 system_pods.go:59] 8 kube-system pods found
	I0213 23:10:24.736396   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:24.736401   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:24.736405   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:24.736409   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:24.736413   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:24.736417   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:24.736423   49036 system_pods.go:61] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:24.736429   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:24.736437   49036 system_pods.go:74] duration metric: took 178.035411ms to wait for pod list to return data ...
	I0213 23:10:24.736444   49036 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:10:24.934360   49036 default_sa.go:45] found service account: "default"
	I0213 23:10:24.934390   49036 default_sa.go:55] duration metric: took 197.940334ms for default service account to be created ...
	I0213 23:10:24.934400   49036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:10:25.135904   49036 system_pods.go:86] 8 kube-system pods found
	I0213 23:10:25.135933   49036 system_pods.go:89] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:25.135940   49036 system_pods.go:89] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:25.135944   49036 system_pods.go:89] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:25.135949   49036 system_pods.go:89] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:25.135954   49036 system_pods.go:89] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:25.135959   49036 system_pods.go:89] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:25.135967   49036 system_pods.go:89] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:25.135973   49036 system_pods.go:89] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:25.135982   49036 system_pods.go:126] duration metric: took 201.576732ms to wait for k8s-apps to be running ...
	I0213 23:10:25.135992   49036 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:10:25.136035   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:10:25.151540   49036 system_svc.go:56] duration metric: took 15.53628ms WaitForService to wait for kubelet.
	I0213 23:10:25.151582   49036 kubeadm.go:581] duration metric: took 39.530502672s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:10:25.151608   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:10:25.333026   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:10:25.333067   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:10:25.333083   49036 node_conditions.go:105] duration metric: took 181.468311ms to run NodePressure ...
	I0213 23:10:25.333171   49036 start.go:228] waiting for startup goroutines ...
	I0213 23:10:25.333186   49036 start.go:233] waiting for cluster config update ...
	I0213 23:10:25.333200   49036 start.go:242] writing updated cluster config ...
	I0213 23:10:25.333540   49036 ssh_runner.go:195] Run: rm -f paused
	I0213 23:10:25.385974   49036 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0213 23:10:25.388225   49036 out.go:177] 
	W0213 23:10:25.389965   49036 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0213 23:10:25.391288   49036 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0213 23:10:25.392550   49036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-245122" cluster and "default" namespace by default
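The warning above flags a 13-minor-version skew between the host kubectl (1.29.1) and the cluster (1.16.0). As the log itself suggests, the version-matched kubectl bundled by minikube avoids that skew; a minimal sketch:

    # run the kubectl that matches the cluster's v1.16.0 control plane
    minikube -p old-k8s-version-245122 kubectl -- get pods -A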
	I0213 23:10:24.281840   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.782341   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.467427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.965363   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:25.158811   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:27.158903   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.162245   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.283592   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.781156   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.465534   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.965570   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.163299   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.664184   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:34.281475   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.282050   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.966548   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.465588   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.159425   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.161056   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.781806   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.782565   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.465618   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.966613   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.659031   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.660105   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:43.282453   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.782436   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.967065   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.465277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.161783   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.659092   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:48.281903   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:50.782326   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.965978   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.972688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:52.464489   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.661150   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:51.661183   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.159746   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:53.280877   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:55.281432   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.465386   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.966020   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.659863   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.161127   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:57.781250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:00.283244   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.464959   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.466871   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.660636   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:04.162081   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:02.782971   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.282593   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:03.964986   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.967545   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:06.660761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.663916   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:07.783437   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.280975   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.281595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.466954   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.965354   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:11.159761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:13.160656   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:14.281819   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:16.781331   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.965830   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.464980   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.659894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.659996   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:18.782849   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.281343   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.965490   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.965841   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:22.465427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.660194   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.660348   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.158929   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:23.281731   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:25.282299   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.966008   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.463392   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:26.160687   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:28.160792   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.783770   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.282652   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:29.464941   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:31.965436   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.160850   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.661971   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.781595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.282110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:33.966260   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:36.465148   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.160093   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.160571   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.782870   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.281536   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:38.466898   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.965121   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:39.659930   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.160848   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.782134   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.287871   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.966494   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:45.465485   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.477988   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.659259   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:46.660566   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.165414   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.781501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.282150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.965827   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.465337   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:51.658915   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.160444   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.286142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.783072   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.465900   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.466029   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.659103   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.660419   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.784481   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.282749   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.965179   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.465662   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:00.661165   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.161035   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.787946   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:06.281932   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.964460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.966240   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.660384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.159544   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.781709   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.782556   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.465300   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.472665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.660651   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.159097   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.281500   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.781953   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:12.965510   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:14.966435   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.465559   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.160583   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.659605   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.784167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:20.280384   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:22.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.468825   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.965088   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.659644   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.662561   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.160923   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.781351   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:27.281938   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:23.966646   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.465094   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.160986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.161300   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:29.780690   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.282298   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.965450   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:31.467937   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.659169   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.659681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.782495   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.782679   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:33.965594   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.465409   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.660174   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.660802   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.160838   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.281205   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.281734   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:38.465702   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:40.965477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.659732   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:44.159873   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:43.780979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.781438   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:42.966342   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.464993   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.465742   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:46.162330   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:48.659964   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.782513   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:50.281255   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:52.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:49.967402   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.968499   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.161451   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:53.659594   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.782653   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.782779   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.465429   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.466199   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:55.659986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:57.661028   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:59.280842   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.281110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:58.965410   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:00.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.458755   49120 pod_ready.go:81] duration metric: took 4m0.00109163s waiting for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:01.458812   49120 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:01.458839   49120 pod_ready.go:38] duration metric: took 4m13.051566827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:01.458873   49120 kubeadm.go:640] restartCluster took 4m33.496925279s
	W0213 23:13:01.458967   49120 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:01.459008   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
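At this point the 4m0s wait for the metrics-server pod has timed out and minikube falls back to a full kubeadm reset. A hedged sketch of how the stuck pod could be inspected before the reset wipes the cluster state (context name assumed to match the profile; the pod name is taken from the log above):

    kubectl --context no-preload-778731 -n kube-system describe pod metrics-server-57f55c9bc5-r44rm
    kubectl --context no-preload-778731 -n kube-system logs metrics-server-57f55c9bc5-r44rm --tail=50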
	I0213 23:13:00.160188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:02.663549   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:03.285939   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.782469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.165196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:07.661417   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:08.283394   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.286257   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.161461   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.652828   49443 pod_ready.go:81] duration metric: took 4m0.001101625s waiting for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:10.652857   49443 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:10.652877   49443 pod_ready.go:38] duration metric: took 4m11.564476633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:10.652905   49443 kubeadm.go:640] restartCluster took 4m34.344806193s
	W0213 23:13:10.652970   49443 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:10.652997   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:12.782042   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:15.282782   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:16.418651   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.959611919s)
	I0213 23:13:16.418750   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:16.435137   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:16.448436   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:16.459777   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:16.459826   49120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:16.708111   49120 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:17.782474   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:20.283238   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:22.782418   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:24.782894   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:26.784203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:28.667785   49120 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:13:28.667865   49120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:28.668000   49120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:28.668151   49120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:28.668282   49120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:28.668372   49120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:28.670147   49120 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:28.670266   49120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:28.670367   49120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:28.670480   49120 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:28.670559   49120 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:28.670674   49120 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:28.670763   49120 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:28.670864   49120 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:28.670964   49120 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:28.671068   49120 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:28.671163   49120 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:28.671221   49120 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:28.671296   49120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:28.671368   49120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:28.671440   49120 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0213 23:13:28.671506   49120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:28.671580   49120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:28.671658   49120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:28.671734   49120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:28.671791   49120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:28.673351   49120 out.go:204]   - Booting up control plane ...
	I0213 23:13:28.673448   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:28.673535   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:28.673627   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:28.673744   49120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:28.673846   49120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:28.673903   49120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:28.674084   49120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:28.674176   49120 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.010705 seconds
	I0213 23:13:28.674315   49120 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:28.674470   49120 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:28.674543   49120 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:28.674766   49120 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-778731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:28.674832   49120 kubeadm.go:322] [bootstrap-token] Using token: dwjaqi.e4fr4bxqfdq63m9e
	I0213 23:13:28.676266   49120 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:28.676392   49120 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:28.676495   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:28.676671   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:28.676871   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:28.677028   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:28.677142   49120 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:28.677283   49120 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:28.677337   49120 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:28.677392   49120 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:28.677405   49120 kubeadm.go:322] 
	I0213 23:13:28.677476   49120 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:28.677488   49120 kubeadm.go:322] 
	I0213 23:13:28.677586   49120 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:28.677599   49120 kubeadm.go:322] 
	I0213 23:13:28.677631   49120 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:28.677712   49120 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:28.677780   49120 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:28.677793   49120 kubeadm.go:322] 
	I0213 23:13:28.677864   49120 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:28.677881   49120 kubeadm.go:322] 
	I0213 23:13:28.677941   49120 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:28.677948   49120 kubeadm.go:322] 
	I0213 23:13:28.678019   49120 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:28.678125   49120 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:28.678215   49120 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:28.678223   49120 kubeadm.go:322] 
	I0213 23:13:28.678324   49120 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:28.678426   49120 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:28.678433   49120 kubeadm.go:322] 
	I0213 23:13:28.678544   49120 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.678685   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:28.678714   49120 kubeadm.go:322] 	--control-plane 
	I0213 23:13:28.678722   49120 kubeadm.go:322] 
	I0213 23:13:28.678834   49120 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:28.678841   49120 kubeadm.go:322] 
	I0213 23:13:28.678950   49120 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.679094   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:28.679106   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:13:28.679116   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:28.680826   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:25.241610   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.588591305s)
	I0213 23:13:25.241679   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:25.257221   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:25.271651   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:25.285556   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:25.285615   49443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:25.530438   49443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:29.281713   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:31.274625   49715 pod_ready.go:81] duration metric: took 4m0.00114055s waiting for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:31.274654   49715 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:31.274676   49715 pod_ready.go:38] duration metric: took 4m13.561333764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:31.274700   49715 kubeadm.go:640] restartCluster took 4m33.95094669s
	W0213 23:13:31.274766   49715 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:31.274807   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:28.682020   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:28.710027   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
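The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config minikube generates for the kvm2+crio combination. A minimal Go sketch of that write step follows; the conflist JSON here is a representative bridge+portmap chain (subnet 10.244.0.0/16 and the individual fields are assumptions), not the exact bytes written in this run.

package main

import (
	"os"
	"path/filepath"
)

// bridgeConflist is an assumed, representative bridge CNI chain; the real
// payload in the log above is 457 bytes and is not reproduced verbatim.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}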
	I0213 23:13:28.752989   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:28.753118   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:28.753117   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=no-preload-778731 minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.147657   49120 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:29.147806   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.647920   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.648105   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.148819   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.648877   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.647939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.005257   49443 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:37.005340   49443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:37.005464   49443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:37.005611   49443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:37.005750   49443 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:37.005836   49443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:37.007501   49443 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:37.007606   49443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:37.007687   49443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:37.007782   49443 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:37.007869   49443 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:37.007960   49443 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:37.008047   49443 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:37.008139   49443 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:37.008221   49443 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:37.008324   49443 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:37.008437   49443 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:37.008488   49443 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:37.008577   49443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:37.008657   49443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:37.008742   49443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:37.008837   49443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:37.008916   49443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:37.009044   49443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:37.009150   49443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:37.010808   49443 out.go:204]   - Booting up control plane ...
	I0213 23:13:37.010943   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:37.011053   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:37.011155   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:37.011537   49443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:37.011661   49443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:37.011720   49443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:37.011915   49443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:37.012024   49443 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005842 seconds
	I0213 23:13:37.012154   49443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:37.012297   49443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:37.012376   49443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:37.012595   49443 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-340656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:37.012668   49443 kubeadm.go:322] [bootstrap-token] Using token: 0y2cx5.j4vucgv3wtut6xkw
	I0213 23:13:37.014296   49443 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:37.014433   49443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:37.014535   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:37.014697   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:37.014837   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:37.014966   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:37.015073   49443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:37.015203   49443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:37.015256   49443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:37.015316   49443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:37.015326   49443 kubeadm.go:322] 
	I0213 23:13:37.015393   49443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:37.015403   49443 kubeadm.go:322] 
	I0213 23:13:37.015500   49443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:37.015511   49443 kubeadm.go:322] 
	I0213 23:13:37.015535   49443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:37.015603   49443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:37.015668   49443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:37.015677   49443 kubeadm.go:322] 
	I0213 23:13:37.015744   49443 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:37.015754   49443 kubeadm.go:322] 
	I0213 23:13:37.015814   49443 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:37.015824   49443 kubeadm.go:322] 
	I0213 23:13:37.015889   49443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:37.015981   49443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:37.016075   49443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:37.016087   49443 kubeadm.go:322] 
	I0213 23:13:37.016182   49443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:37.016272   49443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:37.016282   49443 kubeadm.go:322] 
	I0213 23:13:37.016371   49443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016486   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:37.016522   49443 kubeadm.go:322] 	--control-plane 
	I0213 23:13:37.016527   49443 kubeadm.go:322] 
	I0213 23:13:37.016637   49443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:37.016643   49443 kubeadm.go:322] 
	I0213 23:13:37.016739   49443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016875   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:37.016887   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:13:37.016895   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:37.018483   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:33.148023   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:33.648861   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.147939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.648160   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.148620   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.648710   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.148263   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.648202   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.148597   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.648067   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.019795   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:37.080689   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:37.145132   49443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:37.145273   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.145374   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=embed-certs-340656 minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.195322   49443 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:37.575387   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.075523   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.575550   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.075996   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.148294   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.648747   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.148671   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.648021   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.148566   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.648799   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.148354   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.257502   49120 kubeadm.go:1088] duration metric: took 12.504501087s to wait for elevateKubeSystemPrivileges.
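The burst of identical `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: the default service account is polled until it exists so the cluster-admin binding created earlier can take effect. A standalone sketch of that loop, using the kubectl path and kubeconfig shown in the log, the ~500 ms cadence visible in the timestamps, and an assumed 2-minute cap, might look like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// pollDefaultServiceAccount re-runs `kubectl get sa default` until it succeeds,
// mirroring the repeated ssh_runner calls logged above.
func pollDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	if err := pollDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default service account is ready")
}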
	I0213 23:13:41.257549   49120 kubeadm.go:406] StartCluster complete in 5m13.347836612s
	I0213 23:13:41.257573   49120 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.257681   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:41.260299   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.260647   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:41.260677   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:41.260755   49120 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778731"
	I0213 23:13:41.260779   49120 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778731"
	W0213 23:13:41.260787   49120 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:41.260777   49120 addons.go:69] Setting metrics-server=true in profile "no-preload-778731"
	I0213 23:13:41.260807   49120 addons.go:234] Setting addon metrics-server=true in "no-preload-778731"
	W0213 23:13:41.260815   49120 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:41.260840   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260858   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260882   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:13:41.261207   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261227   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261267   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261291   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261426   49120 addons.go:69] Setting default-storageclass=true in profile "no-preload-778731"
	I0213 23:13:41.261447   49120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778731"
	I0213 23:13:41.261807   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261899   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.278449   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0213 23:13:41.278646   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0213 23:13:41.278874   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.278992   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.279367   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279389   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279460   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279485   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279748   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.279929   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.280301   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280345   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280389   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280403   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0213 23:13:41.280420   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280729   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.281302   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.281324   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.281723   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.281932   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.286017   49120 addons.go:234] Setting addon default-storageclass=true in "no-preload-778731"
	W0213 23:13:41.286039   49120 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:41.286067   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.286476   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.286511   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.299018   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0213 23:13:41.299266   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0213 23:13:41.299626   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.299951   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.300111   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300127   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300624   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300656   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300707   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.300885   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.301280   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.301628   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.303270   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.304846   49120 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:41.303809   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.306034   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:41.306048   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:41.306068   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.307731   49120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:41.309028   49120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.309045   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:41.309065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.309214   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0213 23:13:41.309635   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.309722   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310208   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.310227   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.310342   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.310379   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310514   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.310731   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.310877   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.310900   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.311093   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.311466   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.311516   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.312194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312559   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.312580   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312814   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.313006   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.313140   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.313283   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.327021   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0213 23:13:41.327605   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.328038   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.328055   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.328399   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.328596   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.330082   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.330333   49120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.330344   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:41.330356   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.333321   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333703   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.333731   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.334075   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.334494   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.334643   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.502879   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:41.534876   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:41.534908   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:41.587429   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.589619   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.616755   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:41.616783   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:41.688015   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.688039   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:41.777647   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.844418   49120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-778731" context rescaled to 1 replicas
	I0213 23:13:41.844460   49120 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:41.847252   49120 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:41.848614   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:42.311509   49120 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:42.915046   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327574246s)
	I0213 23:13:42.915112   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915127   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915219   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325575731s)
	I0213 23:13:42.915241   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915250   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915430   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.915467   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.915475   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.915485   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915493   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917607   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917640   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917673   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917652   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917719   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917730   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917764   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.917773   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917996   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.918014   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.963310   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.963336   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.963632   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.963652   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999467   49120 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.150816624s)
	I0213 23:13:42.999513   49120 node_ready.go:35] waiting up to 6m0s for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:42.999542   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221849263s)
	I0213 23:13:42.999604   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999620   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.999914   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.999932   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999944   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999953   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:43.000322   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:43.000341   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:43.000355   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:43.000372   49120 addons.go:470] Verifying addon metrics-server=true in "no-preload-778731"
	I0213 23:13:43.003022   49120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:39.575883   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.076191   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.575969   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.075959   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.576297   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.075511   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.575528   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.076112   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.575825   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:44.076340   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.156104   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.881268834s)
	I0213 23:13:46.156183   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:46.173816   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:46.185578   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:46.196865   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:46.196911   49715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:46.251785   49715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:46.251863   49715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:46.416331   49715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:46.416503   49715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:46.416643   49715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:46.690351   49715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:46.692352   49715 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:46.692470   49715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:46.692583   49715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:46.692710   49715 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:46.692812   49715 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:46.692929   49715 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:46.693027   49715 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:46.693116   49715 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:46.693220   49715 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:46.693322   49715 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:46.693423   49715 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:46.693480   49715 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:46.693559   49715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:46.919270   49715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:47.096236   49715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:47.207058   49715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:47.262083   49715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:47.262614   49715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:47.265288   49715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:47.267143   49715 out.go:204]   - Booting up control plane ...
	I0213 23:13:47.267277   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:47.267383   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:47.267570   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:47.284718   49715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:47.286027   49715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:47.286152   49715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:47.443974   49715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:43.004170   49120 addons.go:505] enable addons completed in 1.743494195s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:43.030538   49120 node_ready.go:49] node "no-preload-778731" has status "Ready":"True"
	I0213 23:13:43.030566   49120 node_ready.go:38] duration metric: took 31.039482ms waiting for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:43.030581   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:43.041854   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:43.085259   49120 pod_ready.go:97] pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085310   49120 pod_ready.go:81] duration metric: took 43.414984ms waiting for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:43.085328   49120 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085337   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094656   49120 pod_ready.go:92] pod "coredns-76f75df574-f4g5w" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.094686   49120 pod_ready.go:81] duration metric: took 2.009341273s waiting for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094696   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101331   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.101352   49120 pod_ready.go:81] duration metric: took 6.650644ms waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101362   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108662   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.108686   49120 pod_ready.go:81] duration metric: took 7.317621ms waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108695   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115600   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.115620   49120 pod_ready.go:81] duration metric: took 6.918739ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115629   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403942   49120 pod_ready.go:92] pod "kube-proxy-7vcqq" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.403977   49120 pod_ready.go:81] duration metric: took 288.33703ms waiting for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403990   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804609   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.804646   49120 pod_ready.go:81] duration metric: took 400.646621ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804661   49120 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
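Each of the pod_ready.go waits above boils down to polling one pod until its Ready condition is True, with a per-pod cap of 6m0s. A minimal client-go sketch of that check is below; the 2-second poll interval is an assumption, while the kubeconfig path and the example pod name are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its Ready condition is True or the
// timeout expires, the same check the pod_ready.go waits above perform.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-778731", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}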
	I0213 23:13:44.575423   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.076435   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.575498   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.076393   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.575716   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.075439   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.575623   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.076149   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.575619   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.757507   49443 kubeadm.go:1088] duration metric: took 11.612278698s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:48.757567   49443 kubeadm.go:406] StartCluster complete in 5m12.504615736s
	I0213 23:13:48.757592   49443 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.757689   49443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:48.760402   49443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.760794   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:48.761145   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:13:48.761320   49443 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:48.761392   49443 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-340656"
	I0213 23:13:48.761411   49443 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-340656"
	W0213 23:13:48.761420   49443 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:48.761470   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762064   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762094   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762173   49443 addons.go:69] Setting default-storageclass=true in profile "embed-certs-340656"
	I0213 23:13:48.762208   49443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-340656"
	I0213 23:13:48.762334   49443 addons.go:69] Setting metrics-server=true in profile "embed-certs-340656"
	I0213 23:13:48.762359   49443 addons.go:234] Setting addon metrics-server=true in "embed-certs-340656"
	W0213 23:13:48.762368   49443 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:48.762418   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762605   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762642   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762770   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762812   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.782845   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0213 23:13:48.782988   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0213 23:13:48.782993   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0213 23:13:48.783453   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783578   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783583   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.784018   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784038   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784160   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784177   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784197   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784211   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784431   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784636   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.784704   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784781   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.785241   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785264   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.785910   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785952   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.795703   49443 addons.go:234] Setting addon default-storageclass=true in "embed-certs-340656"
	W0213 23:13:48.795803   49443 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:48.795847   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.796295   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.796352   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.805562   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0213 23:13:48.806234   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.815444   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0213 23:13:48.815451   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.815558   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.817565   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.817770   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.818164   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.818796   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.818815   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.819308   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0213 23:13:48.819537   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.819661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.819723   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.821798   49443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:48.820119   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.821685   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.823106   49443 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:48.823122   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:48.823142   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.824803   49443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:48.826431   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.826467   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:48.826487   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:48.826507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.826393   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.826536   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.827054   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.827129   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.827155   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.827617   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.828067   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.828089   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.828119   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.828335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.828539   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.830417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.831572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.831604   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.832609   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.832827   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.832999   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.833165   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.851188   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0213 23:13:48.851868   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.852446   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.852482   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.852913   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.853134   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.855360   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.855766   49443 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:48.855792   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:48.855810   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.859610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.859877   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.859915   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.860263   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.860507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.860699   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.860854   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:49.015561   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:49.019336   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:49.047556   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:49.047593   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:49.083994   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:49.109749   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:49.109778   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:49.196430   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.196459   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:49.297603   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.306053   49443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-340656" context rescaled to 1 replicas
	I0213 23:13:49.306112   49443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:49.307559   49443 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:49.308883   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:51.125630   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109969214s)
	I0213 23:13:51.125663   49443 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:51.492579   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473198087s)
	I0213 23:13:51.492655   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492672   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492587   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.408541587s)
	I0213 23:13:51.492794   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492820   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493027   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493041   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493052   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493061   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493362   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493392   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493401   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493458   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493492   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493501   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493511   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493520   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493768   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493791   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.550911   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.550944   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.551267   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.551319   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.728993   49443 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420033663s)
	I0213 23:13:51.729078   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.431431547s)
	I0213 23:13:51.729114   49443 node_ready.go:35] waiting up to 6m0s for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.729135   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729163   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729446   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729462   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729473   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729483   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729770   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.729803   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729813   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729823   49443 addons.go:470] Verifying addon metrics-server=true in "embed-certs-340656"
	I0213 23:13:51.732785   49443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:47.812862   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:49.820823   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:52.318873   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:51.733634   49443 addons.go:505] enable addons completed in 2.972313278s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:51.741252   49443 node_ready.go:49] node "embed-certs-340656" has status "Ready":"True"
	I0213 23:13:51.741279   49443 node_ready.go:38] duration metric: took 12.133263ms waiting for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.741290   49443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:51.749409   49443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766298   49443 pod_ready.go:92] pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.766331   49443 pod_ready.go:81] duration metric: took 1.01688514s waiting for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766345   49443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777697   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.777725   49443 pod_ready.go:81] duration metric: took 11.371663ms waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777738   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789006   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.789030   49443 pod_ready.go:81] duration metric: took 11.286651ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789040   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798798   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.798820   49443 pod_ready.go:81] duration metric: took 9.773358ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798829   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807522   49443 pod_ready.go:92] pod "kube-proxy-4vgt5" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:53.807555   49443 pod_ready.go:81] duration metric: took 1.00871819s waiting for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807569   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133771   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:54.133808   49443 pod_ready.go:81] duration metric: took 326.228368ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133819   49443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:55.947176   49715 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502842 seconds
	I0213 23:13:55.947340   49715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:55.968064   49715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:56.503592   49715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:56.503798   49715 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-083863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:57.020246   49715 kubeadm.go:322] [bootstrap-token] Using token: 1sfxye.gyrkuj525fbtgg0g
	I0213 23:13:57.021591   49715 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:57.021724   49715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:57.028718   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:57.038574   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:57.046578   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:57.051622   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:57.065769   49715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:57.091404   49715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:57.330768   49715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:57.436406   49715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:57.436445   49715 kubeadm.go:322] 
	I0213 23:13:57.436542   49715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:57.436556   49715 kubeadm.go:322] 
	I0213 23:13:57.436650   49715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:57.436681   49715 kubeadm.go:322] 
	I0213 23:13:57.436729   49715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:57.436813   49715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:57.436887   49715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:57.436898   49715 kubeadm.go:322] 
	I0213 23:13:57.436989   49715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:57.437002   49715 kubeadm.go:322] 
	I0213 23:13:57.437067   49715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:57.437078   49715 kubeadm.go:322] 
	I0213 23:13:57.437137   49715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:57.437227   49715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:57.437344   49715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:57.437365   49715 kubeadm.go:322] 
	I0213 23:13:57.437463   49715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:57.437561   49715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:57.437577   49715 kubeadm.go:322] 
	I0213 23:13:57.437713   49715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.437878   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:57.437915   49715 kubeadm.go:322] 	--control-plane 
	I0213 23:13:57.437925   49715 kubeadm.go:322] 
	I0213 23:13:57.438021   49715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:57.438032   49715 kubeadm.go:322] 
	I0213 23:13:57.438140   49715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.438284   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:57.438602   49715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:57.438886   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:13:57.438904   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:57.440968   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:57.442459   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:57.466652   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:57.538217   49715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:57.538279   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:57.538289   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=default-k8s-diff-port-083863 minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:54.320129   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.812983   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.141892   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:58.143201   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:57.914767   49715 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:57.914957   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.415274   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.915866   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.415351   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.915329   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.415646   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.915129   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.415803   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.915716   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:02.415378   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.815013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:01.312236   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:00.645227   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:03.145517   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:02.915447   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.415367   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.915183   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.416047   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.915850   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.415867   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.915570   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.415580   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.915010   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:07.415431   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.314560   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.817591   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.642499   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.644055   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.916067   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.415001   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.915359   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.415672   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.915997   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:10.105267   49715 kubeadm.go:1088] duration metric: took 12.567044904s to wait for elevateKubeSystemPrivileges.
	I0213 23:14:10.105293   49715 kubeadm.go:406] StartCluster complete in 5m12.839656692s
	I0213 23:14:10.105310   49715 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.105392   49715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:14:10.107335   49715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.107629   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:14:10.107747   49715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:14:10.107821   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:14:10.107841   49715 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107858   49715 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107866   49715 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-083863"
	I0213 23:14:10.107873   49715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-083863"
	W0213 23:14:10.107878   49715 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:14:10.107885   49715 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107905   49715 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.107917   49715 addons.go:243] addon metrics-server should already be in state true
	I0213 23:14:10.107941   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.107961   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.108282   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108352   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108368   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108382   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108392   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108355   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.124618   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0213 23:14:10.124636   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0213 23:14:10.125154   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125261   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125984   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.125990   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.126014   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126029   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126422   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126501   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126604   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.127038   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.127067   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131142   49715 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.131168   49715 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:14:10.131196   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.131628   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.131661   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131866   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0213 23:14:10.132342   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.133024   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.133044   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.133539   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.134069   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.134119   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.145244   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0213 23:14:10.145674   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.146213   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.146233   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.146642   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.146845   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.148779   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.151227   49715 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:14:10.152983   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:14:10.153004   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:14:10.150602   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0213 23:14:10.153029   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.154229   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.154857   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.154876   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.155560   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.156429   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.156476   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.156757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.157450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157680   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.157898   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.158068   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.158211   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.159437   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0213 23:14:10.159780   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.160316   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.160328   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.160712   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.160874   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.163133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.166002   49715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:14:10.168221   49715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.168239   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:14:10.168259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.172119   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172539   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.172562   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172800   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.173447   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.173609   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.173769   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.175322   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0213 23:14:10.175719   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.176212   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.176223   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.176556   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.176727   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.178938   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.179149   49715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.179163   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:14:10.179174   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.182253   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.182739   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.182773   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.183106   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.183259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.183425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.183534   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.327834   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:14:10.327857   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:14:10.362507   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.405623   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:14:10.405655   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:14:10.413284   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.427964   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:14:10.459317   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.459343   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:14:10.552860   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.687588   49715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-083863" context rescaled to 1 replicas
	I0213 23:14:10.687640   49715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:14:10.689888   49715 out.go:177] * Verifying Kubernetes components...
	I0213 23:14:10.691656   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:14:08.312251   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:10.313161   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.313239   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.671905   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.309368382s)
	I0213 23:14:12.671963   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258642736s)
	I0213 23:14:12.671974   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.671999   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672008   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244007691s)
	I0213 23:14:12.672048   49715 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 23:14:12.672013   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672319   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672358   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672414   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672428   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672440   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672391   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672502   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672511   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672522   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672672   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672713   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672825   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672842   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672845   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.718598   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.718635   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.718899   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.718948   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.718957   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992151   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.439242656s)
	I0213 23:14:12.992169   49715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.300483548s)
	I0213 23:14:12.992204   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992208   49715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:12.992219   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.992608   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.992650   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.992674   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992694   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992706   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.993012   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.993033   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.993082   49715 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-083863"
	I0213 23:14:12.994959   49715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:14:10.144369   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.642284   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.996304   49715 addons.go:505] enable addons completed in 2.888556474s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:14:13.017331   49715 node_ready.go:49] node "default-k8s-diff-port-083863" has status "Ready":"True"
	I0213 23:14:13.017356   49715 node_ready.go:38] duration metric: took 25.135832ms waiting for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:13.017369   49715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:14:13.040090   49715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047064   49715 pod_ready.go:92] pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.047105   49715 pod_ready.go:81] duration metric: took 2.006967952s waiting for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047119   49715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052773   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.052793   49715 pod_ready.go:81] duration metric: took 5.668033ms waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052801   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.057989   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.058012   49715 pod_ready.go:81] duration metric: took 5.204253ms waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.058024   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063408   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.063426   49715 pod_ready.go:81] duration metric: took 5.394681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063434   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068502   49715 pod_ready.go:92] pod "kube-proxy-kvz2b" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.068523   49715 pod_ready.go:81] duration metric: took 5.082168ms waiting for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068534   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445109   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.445132   49715 pod_ready.go:81] duration metric: took 376.590631ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445142   49715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:17.453588   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:14.816746   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.313290   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:15.141901   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.641098   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.453805   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.954116   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.812763   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.814338   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.641389   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.641735   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.142168   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.455003   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.952168   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.312468   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.813420   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.641722   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.141082   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:28.954054   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:30.954647   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.311343   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.312249   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.143011   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.642102   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.452218   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.453522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.457001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.314313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.812309   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:36.143532   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:38.640894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:39.955206   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.456339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.813776   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.314111   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.642572   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:43.141919   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:44.955150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.454324   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.813470   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.313382   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.143485   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.641760   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.954167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.453822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.814576   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:50.312600   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.313062   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.642698   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.141500   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.141646   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.454979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.953279   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.812403   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.813413   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.142104   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:58.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.453692   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.952522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.313705   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.813002   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:00.642441   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:02.644754   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.954032   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.453202   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.813780   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.312152   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:04.645545   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:07.142188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.454411   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:10.953929   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.813133   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.315282   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:09.641331   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.644066   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:14.141197   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.452937   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:15.453227   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:17.455142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.814488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.312013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.142256   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:19.956449   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.454447   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.313100   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.315124   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.642516   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:23.141725   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.955277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:26.956469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.813277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.813332   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.313503   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:25.148206   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.642527   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.453659   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:31.953193   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.812921   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.311859   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.642812   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.141177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.141385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.452179   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.454250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.312263   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.812360   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.642681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.142639   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:38.952639   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:40.953841   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.311603   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.312975   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.640004   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.641689   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:42.954046   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.453175   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.812207   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:46.313761   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.642354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.141466   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:47.953013   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.455958   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.813689   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:51.312695   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.144359   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.145852   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.952203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.960421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.455215   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:53.312858   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:55.313197   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.313493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.642775   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.142159   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.143780   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.953718   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.954907   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.813086   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:02.313743   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.640609   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:03.641712   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.453269   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:06.454001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.813366   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.313460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:05.642520   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.644309   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:08.454568   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.953538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:09.315454   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:11.814145   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.142385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.644175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.953619   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.452015   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.455884   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:14.311599   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:16.312822   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.143506   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.643647   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:19.952742   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:21.953464   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:18.314298   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.812863   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.142175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:22.641953   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.953599   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.953715   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.312368   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.813170   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:24.642939   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:27.143008   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.452587   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.454360   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.314038   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.812058   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:29.642029   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.141959   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.142628   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.955547   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:35.453428   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.456558   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.813040   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.813607   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.314673   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:36.143091   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:38.147685   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.953073   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:42.452724   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.811843   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:41.811877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:40.645177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.140828   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:44.453277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.453393   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.813703   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.312231   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:45.141859   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:47.142843   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.453508   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.456357   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.312293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.812918   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:49.641676   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.142518   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.951784   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.954108   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.455497   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:53.312477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:55.313195   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.642918   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.141241   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.141855   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.954832   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.455675   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.811554   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.813709   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.313752   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:01.142778   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:03.143196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.953816   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.953967   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.812917   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.814681   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:05.644404   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:07.644824   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.455392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.953935   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.312828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.811876   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:10.141985   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:12.642984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.453572   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.454161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.314828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.813786   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:15.143013   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:17.143864   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.144089   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:18.952608   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:20.952810   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.312837   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.316700   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.641354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:24.142975   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:22.953607   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.453091   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.454501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:23.811674   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.814225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:26.640796   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:28.642684   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:29.952519   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.453137   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.816563   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.314052   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.642932   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:33.142380   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.456778   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.459583   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.812724   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.812895   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.813814   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:35.641888   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.144690   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.952822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.956268   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.821433   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:41.313306   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.641240   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:42.641667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.453378   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.953398   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.313457   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812519   49120 pod_ready.go:81] duration metric: took 4m0.007851911s waiting for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:45.812528   49120 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:45.812534   49120 pod_ready.go:38] duration metric: took 4m2.781943239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:45.812548   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:45.812574   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:45.812640   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:45.881239   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:45.881267   49120 cri.go:89] found id: ""
	I0213 23:17:45.881277   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:45.881327   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.886446   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:45.886531   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:45.926920   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:45.926947   49120 cri.go:89] found id: ""
	I0213 23:17:45.926955   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:45.927000   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.931500   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:45.931577   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:45.979081   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:45.979109   49120 cri.go:89] found id: ""
	I0213 23:17:45.979119   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:45.979174   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.984481   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:45.984539   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:46.035365   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.035385   49120 cri.go:89] found id: ""
	I0213 23:17:46.035392   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:46.035438   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.039634   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:46.039695   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:46.087404   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:46.087429   49120 cri.go:89] found id: ""
	I0213 23:17:46.087436   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:46.087490   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.091828   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:46.091889   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:46.133625   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:46.133651   49120 cri.go:89] found id: ""
	I0213 23:17:46.133658   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:46.133710   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.138378   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:46.138456   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:46.181018   49120 cri.go:89] found id: ""
	I0213 23:17:46.181048   49120 logs.go:276] 0 containers: []
	W0213 23:17:46.181058   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:46.181065   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:46.181141   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:46.221347   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.221374   49120 cri.go:89] found id: ""
	I0213 23:17:46.221385   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:46.221448   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.226298   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:46.226331   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:46.268881   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:46.268915   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.325183   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:46.325225   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.372600   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:46.372637   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:46.791381   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:46.791438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:46.861239   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:46.861431   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:46.884969   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:46.885009   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:46.909324   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:46.909352   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:46.966664   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:46.966698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:47.030276   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:47.030321   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:47.081480   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:47.081516   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:47.238201   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:47.238238   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:47.285995   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:47.286033   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:47.332459   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332486   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:47.332566   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:47.332580   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:47.332596   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:47.332616   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332622   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
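The two kubelet warnings flagged above are node-authorizer denials: the kubelet on no-preload-778731 is only allowed to read the "coredns" ConfigMap once a coredns pod is actually bound to that node, so "no relationship found between node ... and this object" is expected until scheduling catches up. A minimal way to confirm placement, assuming the kubeconfig written by this run and the standard k8s-app=kube-dns label on the coredns pods:

	  kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide    # NODE column shows which node each coredns pod is bound to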
	I0213 23:17:44.643384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.141032   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.953650   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:50.453421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.453501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:49.641373   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.142827   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:54.141398   49443 pod_ready.go:81] duration metric: took 4m0.007567399s waiting for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:54.141420   49443 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:54.141428   49443 pod_ready.go:38] duration metric: took 4m2.400127673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
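The wait above gave up after four minutes because metrics-server-57f55c9bc5-9vcz5 never reported Ready. A rough way to see why, assuming the context name embed-certs-340656 that this run configures later in the log:

	  kubectl --context embed-certs-340656 -n kube-system describe pod metrics-server-57f55c9bc5-9vcz5    # Events section shows image pull or probe failures
	  kubectl --context embed-certs-340656 -n kube-system get pod metrics-server-57f55c9bc5-9vcz5 -o wide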
	I0213 23:17:54.141441   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:54.141464   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:54.141506   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:54.203295   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:54.203319   49443 cri.go:89] found id: ""
	I0213 23:17:54.203329   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:54.203387   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.208671   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:54.208748   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:54.254150   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:54.254183   49443 cri.go:89] found id: ""
	I0213 23:17:54.254193   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:54.254259   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.259090   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:54.259178   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:54.309365   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:54.309385   49443 cri.go:89] found id: ""
	I0213 23:17:54.309392   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:54.309436   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.315937   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:54.316014   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:54.363796   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.363855   49443 cri.go:89] found id: ""
	I0213 23:17:54.363866   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:54.363926   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.368767   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:54.368842   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:54.417590   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:54.417620   49443 cri.go:89] found id: ""
	I0213 23:17:54.417637   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:54.417696   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.422980   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:54.423053   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:54.468990   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.469019   49443 cri.go:89] found id: ""
	I0213 23:17:54.469029   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:54.469094   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.473989   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:54.474073   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:54.524124   49443 cri.go:89] found id: ""
	I0213 23:17:54.524154   49443 logs.go:276] 0 containers: []
	W0213 23:17:54.524164   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:54.524172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:54.524239   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.953845   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.459517   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.333824   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:57.351216   49120 api_server.go:72] duration metric: took 4m15.50672707s to wait for apiserver process to appear ...
	I0213 23:17:57.351245   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:57.351281   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:57.351340   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:57.405928   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:57.405956   49120 cri.go:89] found id: ""
	I0213 23:17:57.405963   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:57.406007   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.410541   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:57.410619   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:57.456843   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:57.456871   49120 cri.go:89] found id: ""
	I0213 23:17:57.456881   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:57.456940   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.461801   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:57.461852   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:57.504653   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.504690   49120 cri.go:89] found id: ""
	I0213 23:17:57.504702   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:57.504762   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.509177   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:57.509250   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:57.556672   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:57.556696   49120 cri.go:89] found id: ""
	I0213 23:17:57.556704   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:57.556747   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.561343   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:57.561399   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:57.606959   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:57.606994   49120 cri.go:89] found id: ""
	I0213 23:17:57.607005   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:57.607068   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.611356   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:57.611440   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:57.655205   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:57.655230   49120 cri.go:89] found id: ""
	I0213 23:17:57.655238   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:57.655284   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.659762   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:57.659850   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:57.699989   49120 cri.go:89] found id: ""
	I0213 23:17:57.700012   49120 logs.go:276] 0 containers: []
	W0213 23:17:57.700019   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:57.700028   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:57.700075   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.562654   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.562674   49443 cri.go:89] found id: ""
	I0213 23:17:54.562682   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:54.562745   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.567182   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:54.567209   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:54.666809   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:54.666847   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:54.818292   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:54.818324   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.878074   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:54.878108   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.938472   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:54.938509   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.985201   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:54.985235   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:54.999987   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:55.000016   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:55.058536   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:55.058573   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:55.108130   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:55.108172   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:55.154299   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:55.154327   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:55.205554   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:55.205583   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:55.615944   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:55.615987   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.179069   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:58.194968   49443 api_server.go:72] duration metric: took 4m8.888826635s to wait for apiserver process to appear ...
	I0213 23:17:58.194992   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:58.195020   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:58.195067   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:58.245997   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.246029   49443 cri.go:89] found id: ""
	I0213 23:17:58.246038   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:58.246103   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.251486   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:58.251566   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:58.299878   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:58.299909   49443 cri.go:89] found id: ""
	I0213 23:17:58.299919   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:58.299977   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.305075   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:58.305139   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:58.352587   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:58.352617   49443 cri.go:89] found id: ""
	I0213 23:17:58.352628   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:58.352688   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.357493   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:58.357576   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:58.412181   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.412203   49443 cri.go:89] found id: ""
	I0213 23:17:58.412211   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:58.412265   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.418852   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:58.418931   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:58.470881   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.470907   49443 cri.go:89] found id: ""
	I0213 23:17:58.470916   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:58.470970   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.476768   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:58.476851   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:58.548272   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:58.548293   49443 cri.go:89] found id: ""
	I0213 23:17:58.548301   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:58.548357   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.553380   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:58.553452   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:58.599623   49443 cri.go:89] found id: ""
	I0213 23:17:58.599652   49443 logs.go:276] 0 containers: []
	W0213 23:17:58.599663   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:58.599669   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:58.599725   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:58.647872   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.647896   49443 cri.go:89] found id: ""
	I0213 23:17:58.647906   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:58.647966   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.653015   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:58.653041   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.707958   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:58.708000   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.759975   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:58.760015   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.814801   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:58.814833   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.853782   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.853814   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:59.217806   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:59.217854   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:59.278255   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:59.278294   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:59.385496   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:59.385537   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:59.953729   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:02.454016   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.740739   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:57.740774   49120 cri.go:89] found id: ""
	I0213 23:17:57.740785   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:57.740839   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.745140   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:57.745163   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:57.758556   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:57.758604   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:57.900468   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:57.900507   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.945665   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:57.945693   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:58.003484   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:58.003521   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:58.048797   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:58.048826   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.096309   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:58.096347   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:58.173795   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.173990   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.196277   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:58.196306   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:58.266087   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:58.266129   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:58.325638   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:58.325676   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:58.372711   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:58.372752   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:58.444057   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.444097   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:58.830470   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830511   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:58.830572   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:58.830591   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.830600   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.830610   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830618   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:59.544056   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:59.544517   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:59.607033   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:59.607067   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:59.654534   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:59.654584   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:59.719274   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:59.719309   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:02.234489   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:18:02.240412   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:18:02.241675   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:02.241699   49443 api_server.go:131] duration metric: took 4.046700263s to wait for apiserver health ...
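The healthz probe above is an HTTPS GET against the apiserver endpoint shown in the log. It can be reproduced by hand as a rough check; -k skips certificate verification, and anonymous access to /healthz is assumed to be enabled (the kubeadm default):

	  curl -k https://192.168.61.56:8443/healthz    # expect HTTP 200 with body "ok", matching the log lines above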
	I0213 23:18:02.241710   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:02.241735   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:02.241796   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:02.289133   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:02.289158   49443 cri.go:89] found id: ""
	I0213 23:18:02.289166   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:18:02.289212   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.295450   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:02.295527   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:02.342262   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:02.342285   49443 cri.go:89] found id: ""
	I0213 23:18:02.342292   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:18:02.342337   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.346810   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:02.346874   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:02.385638   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:02.385665   49443 cri.go:89] found id: ""
	I0213 23:18:02.385673   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:18:02.385725   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.389834   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:02.389920   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:02.435078   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:02.435110   49443 cri.go:89] found id: ""
	I0213 23:18:02.435121   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:18:02.435184   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.440237   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:02.440297   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:02.483869   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.483891   49443 cri.go:89] found id: ""
	I0213 23:18:02.483899   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:18:02.483942   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.490454   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:02.490532   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:02.540971   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:02.541000   49443 cri.go:89] found id: ""
	I0213 23:18:02.541010   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:18:02.541069   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.545818   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:02.545906   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:02.593132   49443 cri.go:89] found id: ""
	I0213 23:18:02.593159   49443 logs.go:276] 0 containers: []
	W0213 23:18:02.593166   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:02.593172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:02.593222   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:02.634979   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.635015   49443 cri.go:89] found id: ""
	I0213 23:18:02.635028   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:18:02.635089   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.640246   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:18:02.640274   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.681426   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:18:02.681458   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.721033   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:02.721062   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:03.049340   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:03.049385   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:18:03.154378   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:18:03.154417   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:03.215045   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:18:03.215081   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:03.260291   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:18:03.260320   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:03.323526   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:18:03.323565   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:03.378686   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:03.378731   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:03.406717   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:03.406742   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:03.547999   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:18:03.548035   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:03.593226   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:18:03.593255   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:06.160914   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:06.160954   49443 system_pods.go:61] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.160963   49443 system_pods.go:61] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.160970   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.160977   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.160996   49443 system_pods.go:61] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.161008   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.161018   49443 system_pods.go:61] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.161025   49443 system_pods.go:61] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.161035   49443 system_pods.go:74] duration metric: took 3.919318115s to wait for pod list to return data ...
	I0213 23:18:06.161046   49443 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:06.165231   49443 default_sa.go:45] found service account: "default"
	I0213 23:18:06.165262   49443 default_sa.go:55] duration metric: took 4.207834ms for default service account to be created ...
	I0213 23:18:06.165271   49443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:06.172453   49443 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:06.172488   49443 system_pods.go:89] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.172494   49443 system_pods.go:89] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.172499   49443 system_pods.go:89] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.172503   49443 system_pods.go:89] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.172507   49443 system_pods.go:89] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.172512   49443 system_pods.go:89] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.172517   49443 system_pods.go:89] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.172522   49443 system_pods.go:89] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.172531   49443 system_pods.go:126] duration metric: took 7.254871ms to wait for k8s-apps to be running ...
	I0213 23:18:06.172541   49443 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:06.172598   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:06.193026   49443 system_svc.go:56] duration metric: took 20.479072ms WaitForService to wait for kubelet.
	I0213 23:18:06.193051   49443 kubeadm.go:581] duration metric: took 4m16.886913912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:06.193072   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:06.196910   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:06.196940   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:06.196951   49443 node_conditions.go:105] duration metric: took 3.874223ms to run NodePressure ...
	I0213 23:18:06.196962   49443 start.go:228] waiting for startup goroutines ...
	I0213 23:18:06.196968   49443 start.go:233] waiting for cluster config update ...
	I0213 23:18:06.196977   49443 start.go:242] writing updated cluster config ...
	I0213 23:18:06.197233   49443 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:06.248295   49443 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:06.250392   49443 out.go:177] * Done! kubectl is now configured to use "embed-certs-340656" cluster and "default" namespace by default
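At this point the embed-certs-340656 profile is considered started and kubectl has been pointed at it. A short sanity check against the freshly written context, assuming the context name matches the profile name as minikube normally sets it:

	  kubectl config current-context     # should print embed-certs-340656
	  kubectl get pods -n kube-system    # the eight kube-system pods listed above, with metrics-server still Pending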
	I0213 23:18:04.455358   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:06.953191   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.954115   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:10.954853   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.832437   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:18:08.838687   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:18:08.839999   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:18:08.840021   49120 api_server.go:131] duration metric: took 11.488768389s to wait for apiserver health ...
	I0213 23:18:08.840031   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:08.840058   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:08.840122   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:08.891532   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:08.891559   49120 cri.go:89] found id: ""
	I0213 23:18:08.891567   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:18:08.891618   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.896712   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:08.896802   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:08.943555   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:08.943584   49120 cri.go:89] found id: ""
	I0213 23:18:08.943593   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:18:08.943654   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.948658   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:08.948730   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:08.995867   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:08.995896   49120 cri.go:89] found id: ""
	I0213 23:18:08.995905   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:18:08.995970   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.000810   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:09.000883   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:09.046606   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.046636   49120 cri.go:89] found id: ""
	I0213 23:18:09.046646   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:18:09.046706   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.050924   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:09.050986   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:09.097414   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.097445   49120 cri.go:89] found id: ""
	I0213 23:18:09.097456   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:18:09.097525   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.102101   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:09.102177   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:09.164244   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.164267   49120 cri.go:89] found id: ""
	I0213 23:18:09.164274   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:18:09.164323   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.169164   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:09.169238   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:09.217068   49120 cri.go:89] found id: ""
	I0213 23:18:09.217094   49120 logs.go:276] 0 containers: []
	W0213 23:18:09.217101   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:09.217106   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:09.217174   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:09.256986   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.257017   49120 cri.go:89] found id: ""
	I0213 23:18:09.257028   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:18:09.257088   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.261602   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:18:09.261625   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.314910   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:18:09.314957   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.361576   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:18:09.361609   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.433243   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:18:09.433281   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.485648   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:09.485698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:09.634091   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:18:09.634127   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:09.681649   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:18:09.681689   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:09.729410   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:09.729449   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:10.100058   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:18:10.100104   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:10.156178   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:10.156209   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:10.229188   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.229358   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.251947   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:10.251987   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:10.268224   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:18:10.268251   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:10.319580   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319608   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:10.319651   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:18:10.319663   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.319673   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.319685   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319696   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:13.453597   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:15.445609   49715 pod_ready.go:81] duration metric: took 4m0.000451749s waiting for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	E0213 23:18:15.445643   49715 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:18:15.445653   49715 pod_ready.go:38] duration metric: took 4m2.428270702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:18:15.445670   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:18:15.445716   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:15.445773   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:15.501757   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:15.501791   49715 cri.go:89] found id: ""
	I0213 23:18:15.501802   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:15.501863   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.507658   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:15.507738   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:15.552164   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:15.552197   49715 cri.go:89] found id: ""
	I0213 23:18:15.552204   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:15.552257   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.557704   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:15.557764   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:15.606147   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:15.606168   49715 cri.go:89] found id: ""
	I0213 23:18:15.606175   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:15.606231   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.610863   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:15.610939   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:15.655298   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:15.655320   49715 cri.go:89] found id: ""
	I0213 23:18:15.655329   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:15.655387   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.660000   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:15.660062   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:15.699700   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:15.699735   49715 cri.go:89] found id: ""
	I0213 23:18:15.699745   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:15.699815   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.704535   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:15.704614   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:15.746999   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:15.747028   49715 cri.go:89] found id: ""
	I0213 23:18:15.747038   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:15.747091   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.752065   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:15.752137   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:15.793372   49715 cri.go:89] found id: ""
	I0213 23:18:15.793404   49715 logs.go:276] 0 containers: []
	W0213 23:18:15.793415   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:15.793422   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:15.793487   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:15.839630   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:15.839660   49715 cri.go:89] found id: ""
	I0213 23:18:15.839668   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:15.839723   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.844199   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:15.844225   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:15.904450   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:15.904479   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:15.925777   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:15.925805   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:16.079602   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:16.079634   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:16.121369   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:16.121400   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:16.174404   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:16.174440   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:16.216286   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:16.216321   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:16.629527   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:16.629564   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:16.708003   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.708235   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.729748   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:16.729784   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:16.784398   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:16.784432   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:16.829885   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:16.829923   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:16.872036   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:16.872066   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:16.937327   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937359   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:16.937411   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:16.937421   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.937431   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.937441   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937449   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:20.329462   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:20.329500   49120 system_pods.go:61] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.329508   49120 system_pods.go:61] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.329515   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.329521   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.329527   49120 system_pods.go:61] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.329533   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.329543   49120 system_pods.go:61] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.329550   49120 system_pods.go:61] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.329560   49120 system_pods.go:74] duration metric: took 11.489522059s to wait for pod list to return data ...
	I0213 23:18:20.329569   49120 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:20.332784   49120 default_sa.go:45] found service account: "default"
	I0213 23:18:20.332809   49120 default_sa.go:55] duration metric: took 3.233136ms for default service account to be created ...
	I0213 23:18:20.332817   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:20.339002   49120 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:20.339033   49120 system_pods.go:89] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.339042   49120 system_pods.go:89] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.339049   49120 system_pods.go:89] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.339056   49120 system_pods.go:89] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.339063   49120 system_pods.go:89] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.339070   49120 system_pods.go:89] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.339084   49120 system_pods.go:89] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.339093   49120 system_pods.go:89] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.339116   49120 system_pods.go:126] duration metric: took 6.292649ms to wait for k8s-apps to be running ...
	I0213 23:18:20.339125   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:20.339183   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:20.354459   49120 system_svc.go:56] duration metric: took 15.325743ms WaitForService to wait for kubelet.
	I0213 23:18:20.354488   49120 kubeadm.go:581] duration metric: took 4m38.510005999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:20.354505   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:20.358160   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:20.358186   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:20.358195   49120 node_conditions.go:105] duration metric: took 3.685402ms to run NodePressure ...
	I0213 23:18:20.358205   49120 start.go:228] waiting for startup goroutines ...
	I0213 23:18:20.358211   49120 start.go:233] waiting for cluster config update ...
	I0213 23:18:20.358220   49120 start.go:242] writing updated cluster config ...
	I0213 23:18:20.358527   49120 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:20.409811   49120 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 23:18:20.412251   49120 out.go:177] * Done! kubectl is now configured to use "no-preload-778731" cluster and "default" namespace by default
	I0213 23:18:26.939087   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:18:26.956231   49715 api_server.go:72] duration metric: took 4m16.268553955s to wait for apiserver process to appear ...
	I0213 23:18:26.956259   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:18:26.956317   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:26.956382   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:27.006428   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.006455   49715 cri.go:89] found id: ""
	I0213 23:18:27.006465   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:27.006527   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.011468   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:27.011542   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:27.054309   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.054334   49715 cri.go:89] found id: ""
	I0213 23:18:27.054344   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:27.054393   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.058925   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:27.058979   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:27.101942   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.101971   49715 cri.go:89] found id: ""
	I0213 23:18:27.101981   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:27.102041   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.107540   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:27.107609   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:27.152126   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.152150   49715 cri.go:89] found id: ""
	I0213 23:18:27.152157   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:27.152203   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.156537   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:27.156608   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:27.202931   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:27.202952   49715 cri.go:89] found id: ""
	I0213 23:18:27.202959   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:27.203006   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.209339   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:27.209405   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:27.250771   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:27.250814   49715 cri.go:89] found id: ""
	I0213 23:18:27.250828   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:27.250898   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.255547   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:27.255621   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:27.297645   49715 cri.go:89] found id: ""
	I0213 23:18:27.297679   49715 logs.go:276] 0 containers: []
	W0213 23:18:27.297689   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:27.297697   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:27.297765   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:27.340690   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.340719   49715 cri.go:89] found id: ""
	I0213 23:18:27.340728   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:27.340786   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.345308   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:27.345338   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:27.481620   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:27.481653   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.541421   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:27.541456   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.594527   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:27.594559   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.657323   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:27.657358   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.710198   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:27.710234   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.750419   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:27.750451   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:28.148333   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:28.148374   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:28.162927   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:28.162959   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:28.214802   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:28.214835   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:28.264035   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:28.264061   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:28.328849   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:28.328888   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:28.408683   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.408859   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429691   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429721   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:28.429772   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:28.429780   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.429787   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429793   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429798   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:38.431065   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:18:38.438496   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:18:38.440109   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:38.440131   49715 api_server.go:131] duration metric: took 11.483865303s to wait for apiserver health ...
	I0213 23:18:38.440139   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:38.440163   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:38.440218   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:38.485767   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:38.485791   49715 cri.go:89] found id: ""
	I0213 23:18:38.485798   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:38.485847   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.490804   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:38.490876   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:38.540174   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:38.540196   49715 cri.go:89] found id: ""
	I0213 23:18:38.540203   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:38.540247   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.545816   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:38.545904   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:38.593443   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:38.593466   49715 cri.go:89] found id: ""
	I0213 23:18:38.593474   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:38.593531   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.598567   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:38.598642   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:38.646508   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:38.646539   49715 cri.go:89] found id: ""
	I0213 23:18:38.646549   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:38.646605   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.651425   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:38.651489   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:38.695133   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:38.695157   49715 cri.go:89] found id: ""
	I0213 23:18:38.695166   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:38.695226   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.700446   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:38.700504   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:38.748214   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.748251   49715 cri.go:89] found id: ""
	I0213 23:18:38.748261   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:38.748319   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.753466   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:38.753532   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:38.796480   49715 cri.go:89] found id: ""
	I0213 23:18:38.796505   49715 logs.go:276] 0 containers: []
	W0213 23:18:38.796514   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:38.796521   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:38.796597   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:38.838145   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.838189   49715 cri.go:89] found id: ""
	I0213 23:18:38.838199   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:38.838259   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.844252   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:38.844279   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.919402   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:38.919442   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.963733   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:38.963767   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:39.013301   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:39.013336   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:39.142161   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:39.142192   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:39.199423   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:39.199455   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:39.245639   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:39.245669   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:39.290916   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:39.290954   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:39.343373   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:39.343405   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:39.700393   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:39.700441   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:39.777386   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.777564   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.800035   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:39.800087   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:39.817941   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:39.817972   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:39.870635   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870675   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:39.870733   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:39.870744   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.870749   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.870756   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870764   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:49.878184   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:49.878220   49715 system_pods.go:61] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.878229   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.878237   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.878244   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.878250   49715 system_pods.go:61] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.878256   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.878268   49715 system_pods.go:61] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.878276   49715 system_pods.go:61] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.878284   49715 system_pods.go:74] duration metric: took 11.438139039s to wait for pod list to return data ...
	I0213 23:18:49.878294   49715 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:49.881702   49715 default_sa.go:45] found service account: "default"
	I0213 23:18:49.881730   49715 default_sa.go:55] duration metric: took 3.42943ms for default service account to be created ...
	I0213 23:18:49.881741   49715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:49.888356   49715 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:49.888380   49715 system_pods.go:89] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.888385   49715 system_pods.go:89] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.888392   49715 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.888397   49715 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.888403   49715 system_pods.go:89] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.888409   49715 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.888422   49715 system_pods.go:89] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.888434   49715 system_pods.go:89] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.888446   49715 system_pods.go:126] duration metric: took 6.698139ms to wait for k8s-apps to be running ...
	I0213 23:18:49.888456   49715 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:49.888497   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:49.905396   49715 system_svc.go:56] duration metric: took 16.928016ms WaitForService to wait for kubelet.
	I0213 23:18:49.905427   49715 kubeadm.go:581] duration metric: took 4m39.217754888s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:49.905452   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:49.909261   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:49.909296   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:49.909312   49715 node_conditions.go:105] duration metric: took 3.854435ms to run NodePressure ...
	I0213 23:18:49.909326   49715 start.go:228] waiting for startup goroutines ...
	I0213 23:18:49.909334   49715 start.go:233] waiting for cluster config update ...
	I0213 23:18:49.909347   49715 start.go:242] writing updated cluster config ...
	I0213 23:18:49.909654   49715 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:49.961022   49715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:49.963033   49715 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-083863" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:21 UTC, ends at Tue 2024-02-13 23:27:08 UTC. --
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.224079629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f25316f2-8101-4d5f-b5d9-b18a23fca12f name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.226114439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f8e0ff48-0136-4169-8ba2-6d1e77421aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.226997702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866828226976557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f8e0ff48-0136-4169-8ba2-6d1e77421aa7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.228259892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=79fb9e19-ecf2-4d63-9af9-6aacd1d99e85 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.228371635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=79fb9e19-ecf2-4d63-9af9-6aacd1d99e85 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.228634272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79fb9e19-ecf2-4d63-9af9-6aacd1d99e85 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.243625675Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=8ad41a63-99e3-4a15-bdb0-59a9cf4718b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.244058115Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bae1a46ded81dd8efd774d74f380bc1b8e4a2dd9c33e05e865b71b4bf77e498b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9vcz5,Uid:8df81e37-71b7-4220-9652-070538ce5a7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866031908309960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9vcz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df81e37-71b7-4220-9652-070538ce5a7f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:51.572451059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cdcb32e-024c-4055-b02f-807b7cc69b74,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866031838669120,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T23:13:51.503442090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&PodSandboxMetadata{Name:kube-proxy-4vgt5,Uid:456eb472-9014-4674-b03c-8e2a0997455b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866029516285795,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:48.873125769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-vrbjt,Ui
d:74c7f72d-10b1-467f-92ac-2888540bd3a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866029466215937,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:49.130674420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-340656,Uid:fe9b7248f5e11d263240042b6cccb18a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007582037664,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fe9b7248f5e11d263240042b6cccb18a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe9b7248f5e11d263240042b6cccb18a,kubernetes.io/config.seen: 2024-02-13T23:13:27.043611970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-340656,Uid:65b418825c26a2b239b9b23b38957138,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007577338551,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.56:8443,kubernetes.io/config.hash: 65b418825c26a2b239b9b23b38957138,kubernetes.io/config.seen: 2024-02-13T23:13:27.043610375Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-340656,Uid:87fc2c43d84856cc722d882ffa68fd93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007563365715,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fc2c43d84856cc722d882ffa68fd93,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87fc2c43d84856cc722d882ffa68fd93,kubernetes.io/config.seen: 2024-02-13T23:13:27.043613387Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-340656,Uid:4efe22c69fab880a31247949f69305fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:
1707866007521560402,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.56:2379,kubernetes.io/config.hash: 4efe22c69fab880a31247949f69305fe,kubernetes.io/config.seen: 2024-02-13T23:13:27.043603525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=8ad41a63-99e3-4a15-bdb0-59a9cf4718b3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.244917640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6898acd3-c440-4fab-bcb8-56553095678c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.244993709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6898acd3-c440-4fab-bcb8-56553095678c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.245287171Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6898acd3-c440-4fab-bcb8-56553095678c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.285515413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=55b4dab3-8cac-4f44-854f-03dacb5e9e90 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.285632137Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=55b4dab3-8cac-4f44-854f-03dacb5e9e90 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.288451501Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3cf43960-6d9c-4a81-9172-54222bf9b15f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.289292573Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866828289269134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3cf43960-6d9c-4a81-9172-54222bf9b15f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.290441659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e50dbcc9-19ac-484f-8c2c-99929ea9a2db name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.290511470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e50dbcc9-19ac-484f-8c2c-99929ea9a2db name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.290818107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e50dbcc9-19ac-484f-8c2c-99929ea9a2db name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.343164963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c7f17e07-59f9-4dc9-ad01-692a59fc0e5d name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.343278854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c7f17e07-59f9-4dc9-ad01-692a59fc0e5d name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.345265545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c2b9017c-66c8-42d2-82d5-feb8f9c2e094 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.345957868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866828345926275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c2b9017c-66c8-42d2-82d5-feb8f9c2e094 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.347291768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=67e06582-62a0-4e52-a717-88450a84d41c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.347358961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=67e06582-62a0-4e52-a717-88450a84d41c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:08 embed-certs-340656 crio[712]: time="2024-02-13 23:27:08.347611367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=67e06582-62a0-4e52-a717-88450a84d41c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e4f1dbcd4edc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   302a7260a315b       storage-provisioner
	92a991060a144       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   7e450494066d6       kube-proxy-4vgt5
	5f131d6441857       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   fd873d3b7e951       coredns-5dd5756b68-vrbjt
	404d20f685e67       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   13 minutes ago      Running             kube-scheduler            2                   5a86bfa47c183       kube-scheduler-embed-certs-340656
	fadcdf769480f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   0ab26698d4d94       etcd-embed-certs-340656
	746971c6f43b8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   13 minutes ago      Running             kube-apiserver            2                   a4526e1366aae       kube-apiserver-embed-certs-340656
	59007ae81d380       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   13 minutes ago      Running             kube-controller-manager   2                   24a5970c34d86       kube-controller-manager-embed-certs-340656
	
	
	==> coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-340656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-340656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=embed-certs-340656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-340656
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:27:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:24:09 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:24:09 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:24:09 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:24:09 +0000   Tue, 13 Feb 2024 23:13:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.56
	  Hostname:    embed-certs-340656
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d053c755c4e49a894725a6234e23a06
	  System UUID:                0d053c75-5c4e-49a8-9472-5a6234e23a06
	  Boot ID:                    abe2c3cc-6972-474c-bc98-db199fdff60d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vrbjt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-340656                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-340656             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-340656    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4vgt5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-340656             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-9vcz5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-340656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-340656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-340656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-340656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-340656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-340656 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-340656 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-340656 status is now: NodeReady
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-340656 event: Registered Node embed-certs-340656 in Controller
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070138] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.479729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.552448] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139457] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.532888] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.533340] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.107784] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.175273] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.127266] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.254494] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +17.822417] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[Feb13 23:09] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:13] systemd-fstab-generator[3471]: Ignoring "noauto" for root device
	[ +10.328581] systemd-fstab-generator[3793]: Ignoring "noauto" for root device
	[ +12.816645] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] <==
	{"level":"info","ts":"2024-02-13T23:13:30.590784Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c137f0a735fac174","initial-advertise-peer-urls":["https://192.168.61.56:2380"],"listen-peer-urls":["https://192.168.61.56:2380"],"advertise-client-urls":["https://192.168.61.56:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.56:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T23:13:30.590845Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T23:13:30.590947Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.56:2380"}
	{"level":"info","ts":"2024-02-13T23:13:30.590972Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.56:2380"}
	{"level":"info","ts":"2024-02-13T23:13:31.032915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:31.033011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:31.033038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 received MsgPreVoteResp from c137f0a735fac174 at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:31.033057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 received MsgVoteResp from c137f0a735fac174 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c137f0a735fac174 elected leader c137f0a735fac174 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.035196Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c137f0a735fac174","local-member-attributes":"{Name:embed-certs-340656 ClientURLs:[https://192.168.61.56:2379]}","request-path":"/0/members/c137f0a735fac174/attributes","cluster-id":"1232dcd2bbaf9bcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:13:31.03566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:31.036827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:31.037909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:31.038082Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:31.038145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:31.041251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.56:2379"}
	{"level":"info","ts":"2024-02-13T23:13:31.041542Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.04558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1232dcd2bbaf9bcb","local-member-id":"c137f0a735fac174","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.045846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.04612Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:23:31.338223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-02-13T23:23:31.341344Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.279471ms","hash":2223224541}
	{"level":"info","ts":"2024-02-13T23:23:31.341536Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2223224541,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 23:27:08 up 18 min,  0 users,  load average: 0.44, 0.27, 0.21
	Linux embed-certs-340656 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] <==
	I0213 23:23:33.066025       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:23:34.066819       1 handler_proxy.go:93] no RequestInfo found in the context
	W0213 23:23:34.066867       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:34.067122       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:23:34.067150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0213 23:23:34.066980       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:23:34.069253       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:24:32.949691       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:24:34.067386       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:34.067564       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:24:34.067598       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:34.069872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:34.069949       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:24:34.069984       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:25:32.950090       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 23:26:32.949337       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:26:34.068354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:34.068576       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:26:34.068640       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:34.070642       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:34.070688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:26:34.070697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] <==
	I0213 23:21:18.683468       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:21:48.141904       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:21:48.693621       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:22:18.149052       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:22:18.702679       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:22:48.156238       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:22:48.712246       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:18.164934       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:18.722802       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:48.171794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:48.732619       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:18.178140       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:18.742832       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:48.185620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:48.194971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="340.192µs"
	I0213 23:24:48.751812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:25:03.195399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="88.097µs"
	E0213 23:25:18.191545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:18.761600       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:25:48.197831       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:48.771812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:18.204292       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:18.785139       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:48.211095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:48.794376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] <==
	I0213 23:13:53.044464       1 server_others.go:69] "Using iptables proxy"
	I0213 23:13:53.062024       1 node.go:141] Successfully retrieved node IP: 192.168.61.56
	I0213 23:13:53.122899       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 23:13:53.122964       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:13:53.129146       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:13:53.129635       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:13:53.130047       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:13:53.130156       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:13:53.132294       1 config.go:188] "Starting service config controller"
	I0213 23:13:53.132512       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:13:53.132960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:13:53.133115       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:13:53.134347       1 config.go:315] "Starting node config controller"
	I0213 23:13:53.134399       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:13:53.233993       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:13:53.234291       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:13:53.235212       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] <==
	W0213 23:13:34.011889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.011920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.116429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:34.116675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:34.287080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:34.287138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:34.290480       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:34.290509       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:13:34.298035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.298089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.326576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:13:34.326679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 23:13:34.393220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:34.393453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:34.445063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:34.445194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:34.469018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.469338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.508641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:34.508839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:34.511142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:13:34.511283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:13:34.544122       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:13:34.544474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0213 23:13:36.195413       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:21 UTC, ends at Tue 2024-02-13 23:27:08 UTC. --
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]: E0213 23:24:37.198280    3800 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-9q7k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9vcz5_kube-system(8df81e37-71b7-4220-9652-070538ce5a7f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]: E0213 23:24:37.198496    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]: E0213 23:24:37.304415    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:24:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:24:48 embed-certs-340656 kubelet[3800]: E0213 23:24:48.175000    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:25:03 embed-certs-340656 kubelet[3800]: E0213 23:25:03.174958    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:25:17 embed-certs-340656 kubelet[3800]: E0213 23:25:17.175106    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:25:31 embed-certs-340656 kubelet[3800]: E0213 23:25:31.176534    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:25:37 embed-certs-340656 kubelet[3800]: E0213 23:25:37.304926    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:25:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:25:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:25:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:25:44 embed-certs-340656 kubelet[3800]: E0213 23:25:44.174957    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:25:55 embed-certs-340656 kubelet[3800]: E0213 23:25:55.174569    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:26:09 embed-certs-340656 kubelet[3800]: E0213 23:26:09.175469    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:26:24 embed-certs-340656 kubelet[3800]: E0213 23:26:24.175000    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:26:35 embed-certs-340656 kubelet[3800]: E0213 23:26:35.174990    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:26:37 embed-certs-340656 kubelet[3800]: E0213 23:26:37.306827    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:26:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:26:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:26:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:26:50 embed-certs-340656 kubelet[3800]: E0213 23:26:50.175154    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:27:02 embed-certs-340656 kubelet[3800]: E0213 23:27:02.174941    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	
	
	==> storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] <==
	I0213 23:13:52.920419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:13:52.969691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:13:52.970011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:13:52.991541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:13:52.992507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876!
	I0213 23:13:52.996145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6771fed6-6360-43c6-8cc5-5fae0fde2cc2", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876 became leader
	I0213 23:13:53.093475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-340656 -n embed-certs-340656
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-340656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9vcz5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5: exit status 1 (76.420125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9vcz5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.67s)
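For context, the kubelet log above shows the metrics-server pod repeatedly failing with ErrImagePull/ImagePullBackOff because its registry was deliberately overridden to fake.domain when the addon was enabled (the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" command is visible in the Audit table further down in this report). A minimal way to inspect that pod by hand, assuming the kube-system namespace shown in the kubelet log, would be:

	kubectl --context embed-certs-340656 -n kube-system describe pod metrics-server-57f55c9bc5-9vcz5

The NotFound error from the post-mortem describe above is likely only because that command ran without -n kube-system, while the kubelet was still reporting the pod in kube-system at the time.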

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778731 -n no-preload-778731
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:27:21.028945389 +0000 UTC m=+5460.643719303
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
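The wait that expired here can be checked by hand with kubectl against the same context, namespace, and label selector recorded in the failure above; a minimal sketch, assuming the test cluster's kubeconfig is active:

	kubectl --context no-preload-778731 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

An empty list after "addons enable dashboard" (see the Audit table below) is consistent with the 9m0s wait expiring with "context deadline exceeded".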
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-778731 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-778731 logs -n 25: (1.824214093s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:05:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:05:02.640377   49715 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:05:02.640501   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640509   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:05:02.640513   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640736   49715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:05:02.641321   49715 out.go:298] Setting JSON to false
	I0213 23:05:02.642273   49715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6454,"bootTime":1707859049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:05:02.642347   49715 start.go:138] virtualization: kvm guest
	I0213 23:05:02.645098   49715 out.go:177] * [default-k8s-diff-port-083863] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:05:02.646964   49715 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:05:02.646970   49715 notify.go:220] Checking for updates...
	I0213 23:05:02.648511   49715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:05:02.650105   49715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:05:02.651715   49715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:05:02.653359   49715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:05:02.655095   49715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:05:02.657048   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:05:02.657426   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.657495   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.672324   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0213 23:05:02.672730   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.673260   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.673290   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.673647   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.673817   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.674096   49715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:05:02.674432   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.674472   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.688915   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0213 23:05:02.689349   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.689790   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.689816   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.690223   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.690421   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.727324   49715 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:05:02.728797   49715 start.go:298] selected driver: kvm2
	I0213 23:05:02.728815   49715 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.728927   49715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:05:02.729600   49715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.729674   49715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:05:02.745692   49715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:05:02.746106   49715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:05:02.746172   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:05:02.746187   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:05:02.746199   49715 start_flags.go:321] config:
	{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-08386
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.746779   49715 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.748860   49715 out.go:177] * Starting control plane node default-k8s-diff-port-083863 in cluster default-k8s-diff-port-083863
	I0213 23:05:02.750290   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:05:02.750326   49715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:05:02.750333   49715 cache.go:56] Caching tarball of preloaded images
	I0213 23:05:02.750421   49715 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:05:02.750463   49715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:05:02.750576   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:05:02.750762   49715 start.go:365] acquiring machines lock for default-k8s-diff-port-083863: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:05:07.158187   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:10.230150   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:16.310133   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:19.382235   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:25.462139   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:28.534229   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:34.614137   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:37.686165   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:43.766206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:46.838168   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:52.918134   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:55.990211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:02.070192   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:05.142167   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:11.222152   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:14.294088   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:20.374194   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:23.446217   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:29.526175   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:32.598147   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:38.678146   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:41.750169   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:47.830142   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:50.902206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:56.982180   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:00.054195   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:06.134182   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:09.206215   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:15.286248   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:18.358211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:24.438162   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:27.510191   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:33.590177   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:36.662174   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:42.742237   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:45.814203   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:48.818472   49120 start.go:369] acquired machines lock for "no-preload-778731" in 4m31.005837415s
	I0213 23:07:48.818529   49120 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:07:48.818538   49120 fix.go:54] fixHost starting: 
	I0213 23:07:48.818916   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:07:48.818948   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:07:48.833483   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 23:07:48.833925   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:07:48.834425   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:07:48.834452   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:07:48.834778   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:07:48.835000   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:07:48.835155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:07:48.836889   49120 fix.go:102] recreateIfNeeded on no-preload-778731: state=Stopped err=<nil>
	I0213 23:07:48.836930   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	W0213 23:07:48.837148   49120 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:07:48.840033   49120 out.go:177] * Restarting existing kvm2 VM for "no-preload-778731" ...
	I0213 23:07:48.816416   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:07:48.816456   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:07:48.818324   49036 machine.go:91] provisioned docker machine in 4m37.408860809s
	I0213 23:07:48.818361   49036 fix.go:56] fixHost completed within 4m37.431023423s
	I0213 23:07:48.818366   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 4m37.431037395s
	W0213 23:07:48.818389   49036 start.go:694] error starting host: provision: host is not running
	W0213 23:07:48.818527   49036 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 23:07:48.818541   49036 start.go:709] Will try again in 5 seconds ...
	I0213 23:07:48.841324   49120 main.go:141] libmachine: (no-preload-778731) Calling .Start
	I0213 23:07:48.841532   49120 main.go:141] libmachine: (no-preload-778731) Ensuring networks are active...
	I0213 23:07:48.842327   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network default is active
	I0213 23:07:48.842678   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network mk-no-preload-778731 is active
	I0213 23:07:48.843032   49120 main.go:141] libmachine: (no-preload-778731) Getting domain xml...
	I0213 23:07:48.843852   49120 main.go:141] libmachine: (no-preload-778731) Creating domain...
	I0213 23:07:50.042665   49120 main.go:141] libmachine: (no-preload-778731) Waiting to get IP...
	I0213 23:07:50.043679   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.044091   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.044189   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.044069   50144 retry.go:31] will retry after 251.949505ms: waiting for machine to come up
	I0213 23:07:50.297817   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.298535   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.298567   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.298493   50144 retry.go:31] will retry after 319.494876ms: waiting for machine to come up
	I0213 23:07:50.620050   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.620443   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.620468   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.620395   50144 retry.go:31] will retry after 308.031117ms: waiting for machine to come up
	I0213 23:07:50.929942   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.930361   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.930391   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.930309   50144 retry.go:31] will retry after 513.800078ms: waiting for machine to come up
	I0213 23:07:51.446223   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:51.446875   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:51.446904   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:51.446813   50144 retry.go:31] will retry after 592.80917ms: waiting for machine to come up
	I0213 23:07:52.042126   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.042542   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.042573   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.042519   50144 retry.go:31] will retry after 688.102963ms: waiting for machine to come up
	I0213 23:07:53.818751   49036 start.go:365] acquiring machines lock for old-k8s-version-245122: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:07:52.732194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.732576   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.732602   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.732538   50144 retry.go:31] will retry after 1.143041451s: waiting for machine to come up
	I0213 23:07:53.877287   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:53.877661   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:53.877687   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:53.877624   50144 retry.go:31] will retry after 918.528315ms: waiting for machine to come up
	I0213 23:07:54.797760   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:54.798287   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:54.798314   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:54.798252   50144 retry.go:31] will retry after 1.679404533s: waiting for machine to come up
	I0213 23:07:56.479283   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:56.479853   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:56.479880   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:56.479785   50144 retry.go:31] will retry after 1.510596076s: waiting for machine to come up
	I0213 23:07:57.992757   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:57.993320   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:57.993352   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:57.993274   50144 retry.go:31] will retry after 2.041602638s: waiting for machine to come up
	I0213 23:08:00.036654   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:00.037130   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:00.037162   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:00.037075   50144 retry.go:31] will retry after 3.403460211s: waiting for machine to come up
	I0213 23:08:03.444689   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:03.445152   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:03.445176   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:03.445088   50144 retry.go:31] will retry after 4.270182412s: waiting for machine to come up
	I0213 23:08:09.107106   49443 start.go:369] acquired machines lock for "embed-certs-340656" in 3m54.456203319s
	I0213 23:08:09.107175   49443 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:09.107194   49443 fix.go:54] fixHost starting: 
	I0213 23:08:09.107647   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:09.107696   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:09.124314   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0213 23:08:09.124675   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:09.125131   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:08:09.125153   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:09.125509   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:09.125705   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:09.125898   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:08:09.127641   49443 fix.go:102] recreateIfNeeded on embed-certs-340656: state=Stopped err=<nil>
	I0213 23:08:09.127661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	W0213 23:08:09.127830   49443 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:09.130334   49443 out.go:177] * Restarting existing kvm2 VM for "embed-certs-340656" ...
	I0213 23:08:09.132354   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Start
	I0213 23:08:09.132546   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring networks are active...
	I0213 23:08:09.133391   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network default is active
	I0213 23:08:09.133758   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network mk-embed-certs-340656 is active
	I0213 23:08:09.134160   49443 main.go:141] libmachine: (embed-certs-340656) Getting domain xml...
	I0213 23:08:09.134954   49443 main.go:141] libmachine: (embed-certs-340656) Creating domain...
	I0213 23:08:07.719971   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.720520   49120 main.go:141] libmachine: (no-preload-778731) Found IP for machine: 192.168.83.31
	I0213 23:08:07.720541   49120 main.go:141] libmachine: (no-preload-778731) Reserving static IP address...
	I0213 23:08:07.720559   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has current primary IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.721043   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.721071   49120 main.go:141] libmachine: (no-preload-778731) DBG | skip adding static IP to network mk-no-preload-778731 - found existing host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"}
	I0213 23:08:07.721086   49120 main.go:141] libmachine: (no-preload-778731) Reserved static IP address: 192.168.83.31
	I0213 23:08:07.721105   49120 main.go:141] libmachine: (no-preload-778731) DBG | Getting to WaitForSSH function...
	I0213 23:08:07.721120   49120 main.go:141] libmachine: (no-preload-778731) Waiting for SSH to be available...
	I0213 23:08:07.723769   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724343   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.724370   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724485   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH client type: external
	I0213 23:08:07.724515   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa (-rw-------)
	I0213 23:08:07.724552   49120 main.go:141] libmachine: (no-preload-778731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:07.724577   49120 main.go:141] libmachine: (no-preload-778731) DBG | About to run SSH command:
	I0213 23:08:07.724605   49120 main.go:141] libmachine: (no-preload-778731) DBG | exit 0
	I0213 23:08:07.823050   49120 main.go:141] libmachine: (no-preload-778731) DBG | SSH cmd err, output: <nil>: 
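The "Using SSH client type: external" lines above show the WaitForSSH probe being run through the system ssh binary with the options printed in the DBG line. A rough Go sketch of invoking that probe with a subset of those options (hypothetical; not minikube's sshutil code):

// Sketch only: run "exit 0" over ssh to confirm sshd is reachable.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa",
		"-p", "22",
		"docker@192.168.83.31",
		"exit 0", // success means SSH is available
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}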
	I0213 23:08:07.823504   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetConfigRaw
	I0213 23:08:07.824155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:07.826730   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827237   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.827277   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827608   49120 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:08:07.827851   49120 machine.go:88] provisioning docker machine ...
	I0213 23:08:07.827877   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:07.828112   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828416   49120 buildroot.go:166] provisioning hostname "no-preload-778731"
	I0213 23:08:07.828464   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828745   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.832015   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832438   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.832477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832698   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.832929   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833125   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833288   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.833480   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.833828   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.833845   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778731 && echo "no-preload-778731" | sudo tee /etc/hostname
	I0213 23:08:07.979041   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778731
	
	I0213 23:08:07.979079   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.982378   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982755   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.982783   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982982   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.983137   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983346   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983462   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.983600   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.983946   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.983967   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778731/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:08.122610   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:08.122641   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:08.122657   49120 buildroot.go:174] setting up certificates
	I0213 23:08:08.122666   49120 provision.go:83] configureAuth start
	I0213 23:08:08.122674   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:08.122935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:08.125641   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126016   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.126046   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126205   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.128441   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128736   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.128780   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128918   49120 provision.go:138] copyHostCerts
	I0213 23:08:08.128984   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:08.128997   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:08.129067   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:08.129198   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:08.129211   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:08.129248   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:08.129321   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:08.129335   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:08.129373   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:08.129443   49120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.no-preload-778731 san=[192.168.83.31 192.168.83.31 localhost 127.0.0.1 minikube no-preload-778731]
	I0213 23:08:08.326156   49120 provision.go:172] copyRemoteCerts
	I0213 23:08:08.326234   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:08.326263   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.329373   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.329952   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.329986   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.330257   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.330447   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.330599   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.330737   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.423570   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:08.447689   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:08.472766   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:08:08.496594   49120 provision.go:86] duration metric: configureAuth took 373.917105ms
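The "generating server cert" step above issues a server certificate signed by the profile's CA, with the SAN list printed in the log (192.168.83.31, localhost, 127.0.0.1, minikube, no-preload-778731). A self-contained Go sketch of that kind of issuance using crypto/x509 with throwaway keys (an illustration, not the actual provision.go; error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert pair; the real flow loads ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-778731"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-778731"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.83.31"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}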
	I0213 23:08:08.496623   49120 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:08.496815   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:08:08.496899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.499464   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499771   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.499801   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.500116   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500284   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500459   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.500651   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.500962   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.500981   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:08.828899   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:08.828935   49120 machine.go:91] provisioned docker machine in 1.001067662s
	I0213 23:08:08.828948   49120 start.go:300] post-start starting for "no-preload-778731" (driver="kvm2")
	I0213 23:08:08.828966   49120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:08.828987   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:08.829378   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:08.829401   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.831985   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832340   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.832365   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832498   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.832717   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.832873   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.833022   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.930192   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:08.934633   49120 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:08.934660   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:08.934723   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:08.934804   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:08.934893   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:08.945400   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:08.973850   49120 start.go:303] post-start completed in 144.888108ms
	I0213 23:08:08.973894   49120 fix.go:56] fixHost completed within 20.155355472s
	I0213 23:08:08.973917   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.976477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976799   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.976831   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976990   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.977177   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977358   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977513   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.977664   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.978069   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.978082   49120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:09.106952   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865689.053803664
	
	I0213 23:08:09.106977   49120 fix.go:206] guest clock: 1707865689.053803664
	I0213 23:08:09.106984   49120 fix.go:219] Guest: 2024-02-13 23:08:09.053803664 +0000 UTC Remote: 2024-02-13 23:08:08.973898202 +0000 UTC m=+291.312557253 (delta=79.905462ms)
	I0213 23:08:09.107004   49120 fix.go:190] guest clock delta is within tolerance: 79.905462ms
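The guest-clock check above compares the VM's clock against the host and accepts the skew when it falls within a tolerance. A small Go sketch reproducing the delta computation from the values logged above (the 2s tolerance is a hypothetical threshold, used here only for illustration):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1707865689, 53803664)                        // "guest clock: 1707865689.053803664"
	remote := time.Date(2024, 2, 13, 23, 8, 8, 973898202, time.UTC) // "Remote: 2024-02-13 23:08:08.973898202 +0000 UTC"
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // hypothetical threshold
	fmt.Printf("guest clock delta is %v; within tolerance: %v\n", delta, delta <= tolerance)
}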
	I0213 23:08:09.107011   49120 start.go:83] releasing machines lock for "no-preload-778731", held for 20.288505954s
	I0213 23:08:09.107046   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.107372   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:09.110226   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110592   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.110623   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110795   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111368   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111531   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111622   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:09.111662   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.113712   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.114053   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.114096   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.117964   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.118031   49120 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:09.118065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.118167   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.118318   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.118615   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.120610   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121054   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.121088   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121290   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.121461   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.121627   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.121770   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.234065   49120 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:09.240751   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:09.393966   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:09.401672   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:09.401767   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:09.426073   49120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:09.426099   49120 start.go:475] detecting cgroup driver to use...
	I0213 23:08:09.426172   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:09.446114   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:09.461330   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:09.461404   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:09.475964   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:09.490801   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:09.621898   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:09.747413   49120 docker.go:233] disabling docker service ...
	I0213 23:08:09.747470   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:09.766642   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:09.783116   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:09.910634   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:10.052181   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:10.066413   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:10.089436   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:10.089505   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.100366   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:10.100453   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.111681   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.122231   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.132945   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:10.146287   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:10.156405   49120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:10.156481   49120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:10.172152   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:10.182862   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:10.315633   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:10.509774   49120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:10.509878   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
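After restarting CRI-O, the log waits up to 60s for its socket path to appear before moving on. A minimal Go sketch of such a poll-until-exists wait (a simplified assumption, not the actual start.go logic):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the file exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}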
	I0213 23:08:10.514924   49120 start.go:543] Will wait 60s for crictl version
	I0213 23:08:10.515016   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.518898   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:10.558596   49120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:10.558695   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.611876   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.664604   49120 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:08:10.665908   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:10.669029   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669393   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:10.669442   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669676   49120 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:10.673975   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:10.686760   49120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:08:10.686830   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:10.730784   49120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:08:10.730813   49120 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:08:10.730900   49120 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.730903   49120 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.730909   49120 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.730914   49120 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.731026   49120 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.731083   49120 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.731131   49120 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.731497   49120 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732506   49120 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.732511   49120 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.732513   49120 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.732543   49120 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732577   49120 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.732597   49120 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.732719   49120 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.732759   49120 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.880038   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.891830   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0213 23:08:10.905668   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.930079   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.940850   49120 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0213 23:08:10.940894   49120 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.940941   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.942664   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.985299   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.011467   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.040720   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.099497   49120 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0213 23:08:11.099544   49120 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0213 23:08:11.099577   49120 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.099614   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:11.099636   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099651   49120 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0213 23:08:11.099683   49120 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.099711   49120 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0213 23:08:11.099740   49120 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.099746   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099760   49120 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0213 23:08:11.099767   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099782   49120 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.099547   49120 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.099901   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099916   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.107567   49120 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0213 23:08:11.107614   49120 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.107675   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.119038   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.157701   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.157799   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.157722   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.157768   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.157830   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.157919   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0213 23:08:11.158002   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.200990   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 23:08:11.201117   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:11.299985   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.300039   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 23:08:11.300041   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300130   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:11.300137   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300148   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0213 23:08:11.300163   49120 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300198   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300209   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300216   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0213 23:08:11.300203   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300098   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300293   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300096   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.318252   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0213 23:08:11.318307   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318355   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318520   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0213 23:08:10.406170   49443 main.go:141] libmachine: (embed-certs-340656) Waiting to get IP...
	I0213 23:08:10.407139   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.407616   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.407692   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.407598   50262 retry.go:31] will retry after 193.299479ms: waiting for machine to come up
	I0213 23:08:10.603143   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.603673   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.603696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.603627   50262 retry.go:31] will retry after 369.099644ms: waiting for machine to come up
	I0213 23:08:10.974421   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.974922   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.974953   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.974870   50262 retry.go:31] will retry after 418.956642ms: waiting for machine to come up
	I0213 23:08:11.395489   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:11.395974   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:11.396005   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:11.395937   50262 retry.go:31] will retry after 610.320769ms: waiting for machine to come up
	I0213 23:08:12.007695   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.008167   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.008198   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.008115   50262 retry.go:31] will retry after 624.461953ms: waiting for machine to come up
	I0213 23:08:12.634088   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.634519   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.634552   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.634467   50262 retry.go:31] will retry after 903.217503ms: waiting for machine to come up
	I0213 23:08:13.539114   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:13.539683   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:13.539725   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:13.539611   50262 retry.go:31] will retry after 747.647967ms: waiting for machine to come up
	I0213 23:08:14.288632   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:14.289301   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:14.289338   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:14.289236   50262 retry.go:31] will retry after 1.415080779s: waiting for machine to come up
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.810648669s)
	I0213 23:08:15.110937   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.810587707s)
	I0213 23:08:15.110961   49120 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:15.110969   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0213 23:08:15.111009   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:17.178104   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067071549s)
	I0213 23:08:17.178130   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0213 23:08:17.178156   49120 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:17.178204   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:15.706329   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:15.706863   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:15.706901   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:15.706769   50262 retry.go:31] will retry after 1.500671136s: waiting for machine to come up
	I0213 23:08:17.209706   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:17.210252   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:17.210278   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:17.210198   50262 retry.go:31] will retry after 1.743342291s: waiting for machine to come up
	I0213 23:08:18.956397   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:18.956934   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:18.956971   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:18.956874   50262 retry.go:31] will retry after 2.095777111s: waiting for machine to come up
	I0213 23:08:18.227625   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.049388261s)
	I0213 23:08:18.227663   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 23:08:18.227691   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:18.227756   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:21.120783   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.892997016s)
	I0213 23:08:21.120823   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0213 23:08:21.120854   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.120908   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.055630   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:21.056028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:21.056106   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:21.056004   50262 retry.go:31] will retry after 3.144708692s: waiting for machine to come up
	I0213 23:08:24.202158   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:24.202562   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:24.202584   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:24.202515   50262 retry.go:31] will retry after 3.072407019s: waiting for machine to come up
	I0213 23:08:23.793772   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.672817599s)
	I0213 23:08:23.793813   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0213 23:08:23.793841   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:23.793916   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:25.866352   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.072399119s)
	I0213 23:08:25.866388   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0213 23:08:25.866422   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:25.866469   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:27.315469   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.44897051s)
	I0213 23:08:27.315502   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0213 23:08:27.315534   49120 cache_images.go:123] Successfully loaded all cached images
	I0213 23:08:27.315540   49120 cache_images.go:92] LoadImages completed in 16.584715329s
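The cache_images flow above checks whether each required image is already present in the container runtime and, when it is not, removes any stale tag and loads the cached tarball with "podman load". A simplified Go sketch of that ensure-image step (the name-based existence check here is an assumption for brevity; the real code compares image hashes):

package main

import (
	"fmt"
	"os/exec"
)

func ensureImage(image, cachedTar string) error {
	// Already in the runtime? (podman image exists returns 0 when the image is present.)
	if err := exec.Command("sudo", "podman", "image", "exists", image).Run(); err == nil {
		return nil
	}
	// Drop any stale tag, then load the tarball staged under /var/lib/minikube/images.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	out, err := exec.Command("sudo", "podman", "load", "-i", cachedTar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", cachedTar, err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/etcd:3.5.10-0", "/var/lib/minikube/images/etcd_3.5.10-0")
	fmt.Println(err)
}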
	I0213 23:08:27.315650   49120 ssh_runner.go:195] Run: crio config
	I0213 23:08:27.383180   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:27.383203   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:27.383224   49120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:27.383249   49120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778731 NodeName:no-preload-778731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:27.383445   49120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778731"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:27.383545   49120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-778731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:27.383606   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:08:27.393312   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:27.393384   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:27.401513   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0213 23:08:27.419705   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:08:27.439236   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0213 23:08:27.457026   49120 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:27.461679   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:27.474701   49120 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731 for IP: 192.168.83.31
	I0213 23:08:27.474740   49120 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:27.474922   49120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:27.474966   49120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:27.475042   49120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.key
	I0213 23:08:27.475102   49120 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key.049c2370
	I0213 23:08:27.475138   49120 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key
	I0213 23:08:27.475241   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:27.475271   49120 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:27.475281   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:27.475305   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:27.475326   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:27.475360   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:27.475401   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:27.475997   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:27.500212   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:27.526078   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:27.552892   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:27.579169   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:27.603962   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:27.628862   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:27.653046   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:27.681039   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:27.708026   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:28.658782   49715 start.go:369] acquired machines lock for "default-k8s-diff-port-083863" in 3m25.907988779s
	I0213 23:08:28.658844   49715 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:28.658851   49715 fix.go:54] fixHost starting: 
	I0213 23:08:28.659235   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:28.659276   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:28.677314   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0213 23:08:28.677718   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:28.678315   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:08:28.678355   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:28.678727   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:28.678935   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:28.679109   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:08:28.680868   49715 fix.go:102] recreateIfNeeded on default-k8s-diff-port-083863: state=Stopped err=<nil>
	I0213 23:08:28.680915   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	W0213 23:08:28.681100   49715 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:28.683083   49715 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-083863" ...
	I0213 23:08:27.278610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279033   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has current primary IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279068   49443 main.go:141] libmachine: (embed-certs-340656) Found IP for machine: 192.168.61.56
	I0213 23:08:27.279085   49443 main.go:141] libmachine: (embed-certs-340656) Reserving static IP address...
	I0213 23:08:27.279524   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.279553   49443 main.go:141] libmachine: (embed-certs-340656) Reserved static IP address: 192.168.61.56
	I0213 23:08:27.279572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | skip adding static IP to network mk-embed-certs-340656 - found existing host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"}
	I0213 23:08:27.279592   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Getting to WaitForSSH function...
	I0213 23:08:27.279609   49443 main.go:141] libmachine: (embed-certs-340656) Waiting for SSH to be available...
	I0213 23:08:27.282041   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282383   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.282417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282517   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH client type: external
	I0213 23:08:27.282548   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa (-rw-------)
	I0213 23:08:27.282582   49443 main.go:141] libmachine: (embed-certs-340656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:27.282598   49443 main.go:141] libmachine: (embed-certs-340656) DBG | About to run SSH command:
	I0213 23:08:27.282688   49443 main.go:141] libmachine: (embed-certs-340656) DBG | exit 0
	I0213 23:08:27.374230   49443 main.go:141] libmachine: (embed-certs-340656) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:27.374589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetConfigRaw
	I0213 23:08:27.375330   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.378273   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378648   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.378682   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378917   49443 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:08:27.379092   49443 machine.go:88] provisioning docker machine ...
	I0213 23:08:27.379109   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:27.379298   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379491   49443 buildroot.go:166] provisioning hostname "embed-certs-340656"
	I0213 23:08:27.379521   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379667   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.382028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382351   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.382404   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382562   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.382728   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.382880   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.383023   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.383213   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.383662   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.383682   49443 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname
	I0213 23:08:27.526044   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-340656
	
	I0213 23:08:27.526075   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.529185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529526   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.529556   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529660   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.529852   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530056   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530203   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.530356   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.530695   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.530725   49443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-340656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-340656/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-340656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:27.664926   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:27.664966   49443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:27.664993   49443 buildroot.go:174] setting up certificates
	I0213 23:08:27.665004   49443 provision.go:83] configureAuth start
	I0213 23:08:27.665019   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.665429   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.668520   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.668912   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.668937   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.669172   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.671996   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672365   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.672411   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672620   49443 provision.go:138] copyHostCerts
	I0213 23:08:27.672684   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:27.672706   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:27.672778   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:27.672914   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:27.672929   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:27.672966   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:27.673049   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:27.673060   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:27.673089   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:27.673187   49443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.embed-certs-340656 san=[192.168.61.56 192.168.61.56 localhost 127.0.0.1 minikube embed-certs-340656]
	I0213 23:08:27.924954   49443 provision.go:172] copyRemoteCerts
	I0213 23:08:27.925011   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:27.925033   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.928037   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928376   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.928410   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928588   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.928779   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.928960   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.929085   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.019335   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:28.043949   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 23:08:28.066824   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:08:28.089010   49443 provision.go:86] duration metric: configureAuth took 423.986916ms
	I0213 23:08:28.089043   49443 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:28.089251   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:28.089316   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.091655   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.091955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.091984   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.092151   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.092310   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092440   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092553   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.092694   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.092999   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.093014   49443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:28.402931   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:28.402953   49443 machine.go:91] provisioned docker machine in 1.023849221s
	I0213 23:08:28.402962   49443 start.go:300] post-start starting for "embed-certs-340656" (driver="kvm2")
	I0213 23:08:28.402972   49443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:28.402986   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.403246   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:28.403266   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.405815   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.406201   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406331   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.406514   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.406703   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.406867   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.500638   49443 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:28.504820   49443 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:28.504839   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:28.504899   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:28.504967   49443 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:28.505051   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:28.514593   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:28.536607   49443 start.go:303] post-start completed in 133.632311ms
	I0213 23:08:28.536653   49443 fix.go:56] fixHost completed within 19.429451259s
	I0213 23:08:28.536673   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.539355   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539715   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.539739   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539914   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.540115   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540275   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540420   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.540581   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.540917   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.540932   49443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:28.658649   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865708.631208852
	
	I0213 23:08:28.658674   49443 fix.go:206] guest clock: 1707865708.631208852
	I0213 23:08:28.658682   49443 fix.go:219] Guest: 2024-02-13 23:08:28.631208852 +0000 UTC Remote: 2024-02-13 23:08:28.536657964 +0000 UTC m=+254.042699377 (delta=94.550888ms)
	I0213 23:08:28.658701   49443 fix.go:190] guest clock delta is within tolerance: 94.550888ms
	I0213 23:08:28.658707   49443 start.go:83] releasing machines lock for "embed-certs-340656", held for 19.551560323s
	I0213 23:08:28.658730   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.658982   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:28.662069   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662449   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.662480   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662651   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663245   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663430   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663521   49443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:28.663567   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.663688   49443 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:28.663712   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.666417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666867   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.666900   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667039   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.667185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667234   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667418   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667467   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667518   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.667589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667736   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.782794   49443 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:28.788743   49443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:28.933478   49443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:28.940543   49443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:28.940632   49443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:28.958972   49443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:28.958994   49443 start.go:475] detecting cgroup driver to use...
	I0213 23:08:28.959084   49443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:28.977833   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:28.996142   49443 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:28.996205   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:29.015509   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:29.029839   49443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:29.140405   49443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:29.265524   49443 docker.go:233] disabling docker service ...
	I0213 23:08:29.265597   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:29.283479   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:29.300116   49443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:29.428731   49443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:29.555072   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:29.569803   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:29.589259   49443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:29.589329   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.600653   49443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:29.600732   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.612313   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.624637   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.636279   49443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:29.648496   49443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:29.658957   49443 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:29.659020   49443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:29.673605   49443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:29.684589   49443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:29.800899   49443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:29.989345   49443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:29.989423   49443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:29.995420   49443 start.go:543] Will wait 60s for crictl version
	I0213 23:08:29.995489   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:08:30.000012   49443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:30.047026   49443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:30.047114   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.095456   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.146027   49443 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:28.684576   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Start
	I0213 23:08:28.684757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring networks are active...
	I0213 23:08:28.685582   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network default is active
	I0213 23:08:28.685942   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network mk-default-k8s-diff-port-083863 is active
	I0213 23:08:28.686429   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Getting domain xml...
	I0213 23:08:28.687208   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Creating domain...
	I0213 23:08:30.003148   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting to get IP...
	I0213 23:08:30.004175   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004634   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004725   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.004599   50394 retry.go:31] will retry after 210.109414ms: waiting for machine to come up
	I0213 23:08:30.215983   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216407   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216439   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.216359   50394 retry.go:31] will retry after 367.743906ms: waiting for machine to come up
	I0213 23:08:30.586081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586629   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586663   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.586583   50394 retry.go:31] will retry after 342.736609ms: waiting for machine to come up
	I0213 23:08:30.931248   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931707   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931738   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.931656   50394 retry.go:31] will retry after 597.326691ms: waiting for machine to come up
	I0213 23:08:31.530395   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530818   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530848   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:31.530767   50394 retry.go:31] will retry after 749.518323ms: waiting for machine to come up
	I0213 23:08:32.281688   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282102   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282138   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:32.282052   50394 retry.go:31] will retry after 760.722423ms: waiting for machine to come up
	I0213 23:08:27.731687   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:27.755515   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:27.774677   49120 ssh_runner.go:195] Run: openssl version
	I0213 23:08:27.780042   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:27.789684   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794384   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794443   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.800052   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:27.809570   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:27.818781   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823148   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823241   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.829043   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:27.839290   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:27.849614   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854661   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854720   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.860365   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:27.870548   49120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:27.874967   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:27.880745   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:27.886409   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:27.892063   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:27.897857   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:27.903804   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:27.909720   49120 kubeadm.go:404] StartCluster: {Name:no-preload-778731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:27.909833   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:27.909924   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:27.951061   49120 cri.go:89] found id: ""
	I0213 23:08:27.951158   49120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:27.961916   49120 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:27.961941   49120 kubeadm.go:636] restartCluster start
	I0213 23:08:27.961993   49120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:27.971549   49120 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:27.972633   49120 kubeconfig.go:92] found "no-preload-778731" server: "https://192.168.83.31:8443"
	I0213 23:08:27.975092   49120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:27.983592   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:27.983650   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:27.993448   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.483988   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.484086   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.499804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.984581   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.984671   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.995887   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.484572   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.484680   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.496906   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.984503   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.984569   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.997813   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.484312   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.484391   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.501606   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.984144   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.984237   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.999418   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.483900   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.483977   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.498536   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.983688   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.983783   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.998804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:32.484556   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.484662   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:32.499238   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.147474   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:30.150438   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.150826   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:30.150857   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.151054   49443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:30.155517   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:30.168463   49443 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:30.168543   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:30.210212   49443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:30.210296   49443 ssh_runner.go:195] Run: which lz4
	I0213 23:08:30.214665   49443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:30.219355   49443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:30.219383   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:32.244671   49443 crio.go:444] Took 2.030037 seconds to copy over tarball
	I0213 23:08:32.244757   49443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:33.043974   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044478   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:33.044417   50394 retry.go:31] will retry after 1.030870704s: waiting for machine to come up
	I0213 23:08:34.077209   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077662   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077692   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:34.077625   50394 retry.go:31] will retry after 1.450536952s: waiting for machine to come up
	I0213 23:08:35.529659   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530101   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530135   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:35.530042   50394 retry.go:31] will retry after 1.82898716s: waiting for machine to come up
	I0213 23:08:37.360889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361314   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361343   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:37.361270   50394 retry.go:31] will retry after 1.83473409s: waiting for machine to come up
	I0213 23:08:32.984096   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.984203   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.001189   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.483705   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.499694   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.983927   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.984057   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.999205   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.483708   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.483798   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.498840   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.984372   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.984461   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.999079   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.483661   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.497573   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.983985   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.984088   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.995899   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.484546   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.484660   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.496286   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.983902   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.984113   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.995778   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.484405   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.484518   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.495219   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.549721   49443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304931423s)
	I0213 23:08:35.549748   49443 crio.go:451] Took 3.305051 seconds to extract the tarball
	I0213 23:08:35.549778   49443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:35.590195   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:35.640735   49443 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:35.640768   49443 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:35.640850   49443 ssh_runner.go:195] Run: crio config
	I0213 23:08:35.707018   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:35.707046   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:35.707072   49443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:35.707117   49443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-340656 NodeName:embed-certs-340656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:35.707294   49443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-340656"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:35.707405   49443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-340656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:35.707483   49443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:35.717170   49443 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:35.717251   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:35.726586   49443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0213 23:08:35.744139   49443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:35.761480   49443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0213 23:08:35.779911   49443 ssh_runner.go:195] Run: grep 192.168.61.56	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:35.784152   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:35.799376   49443 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656 for IP: 192.168.61.56
	I0213 23:08:35.799417   49443 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:35.799601   49443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:35.799657   49443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:35.799766   49443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/client.key
	I0213 23:08:35.799859   49443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key.aef5f426
	I0213 23:08:35.799913   49443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key
	I0213 23:08:35.800053   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:35.800091   49443 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:35.800107   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:35.800143   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:35.800180   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:35.800215   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:35.800276   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:35.801130   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:35.829983   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:35.856832   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:35.883713   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:35.910759   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:35.937208   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:35.963904   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:35.991562   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:36.022900   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:36.049084   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:36.074152   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:36.098863   49443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:36.115588   49443 ssh_runner.go:195] Run: openssl version
	I0213 23:08:36.120864   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:36.130552   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.134999   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.135068   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.140621   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:36.150963   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:36.160917   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165428   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165472   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.171493   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:36.181635   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:36.191753   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196368   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196444   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.201985   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:36.211839   49443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:36.216608   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:36.222594   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:36.228585   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:36.234646   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:36.240579   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:36.246642   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:36.252961   49443 kubeadm.go:404] StartCluster: {Name:embed-certs-340656 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:36.253087   49443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:36.253149   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:36.297601   49443 cri.go:89] found id: ""
	I0213 23:08:36.297705   49443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:36.308068   49443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:36.308094   49443 kubeadm.go:636] restartCluster start
	I0213 23:08:36.308152   49443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:36.318071   49443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.319274   49443 kubeconfig.go:92] found "embed-certs-340656" server: "https://192.168.61.56:8443"
	I0213 23:08:36.321573   49443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:36.331006   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.331059   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.342313   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.831994   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.832106   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.845071   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.331654   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.331724   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.344311   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.831903   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.831999   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.843671   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.331225   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.331337   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.349021   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.831196   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.831292   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.847050   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.332089   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.332162   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.348108   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.198188   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198570   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198596   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:39.198528   50394 retry.go:31] will retry after 2.722095348s: waiting for machine to come up
	I0213 23:08:41.923545   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923954   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923985   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:41.923904   50394 retry.go:31] will retry after 2.239772531s: waiting for machine to come up
	I0213 23:08:37.984640   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.984743   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.999300   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.999332   49120 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:37.999340   49120 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:37.999349   49120 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:37.999402   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:38.046199   49120 cri.go:89] found id: ""
	I0213 23:08:38.046287   49120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:38.061697   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:38.071295   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:38.071378   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080401   49120 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:38.209853   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.403696   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193792627s)
	I0213 23:08:39.403733   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.602387   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.703317   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.783257   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:39.783347   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.284357   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.784437   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.284302   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.783582   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.284435   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.312653   49120 api_server.go:72] duration metric: took 2.529396171s to wait for apiserver process to appear ...
	I0213 23:08:42.312698   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:42.312719   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:42.313220   49120 api_server.go:269] stopped: https://192.168.83.31:8443/healthz: Get "https://192.168.83.31:8443/healthz": dial tcp 192.168.83.31:8443: connect: connection refused
	I0213 23:08:39.832020   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.832156   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.848229   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.331855   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.331992   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.347635   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.831070   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.831185   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.847184   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.331346   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.331444   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.346518   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.831081   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.831160   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.846752   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.331298   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.331389   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.348782   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.831278   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.831373   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.846241   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.331807   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.331876   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.346998   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.831697   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.831792   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.843733   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.331647   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.331762   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.343476   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.165021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165387   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:44.165357   50394 retry.go:31] will retry after 2.886798605s: waiting for machine to come up
	I0213 23:08:47.055186   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055880   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Found IP for machine: 192.168.39.3
	I0213 23:08:47.055923   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserving static IP address...
	I0213 23:08:47.056480   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.056512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserved static IP address: 192.168.39.3
	I0213 23:08:47.056537   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | skip adding static IP to network mk-default-k8s-diff-port-083863 - found existing host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"}
	I0213 23:08:47.056552   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Getting to WaitForSSH function...
	I0213 23:08:47.056567   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for SSH to be available...
	I0213 23:08:47.059414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059844   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.059882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059991   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH client type: external
	I0213 23:08:47.060025   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa (-rw-------)
	I0213 23:08:47.060061   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:47.060077   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | About to run SSH command:
	I0213 23:08:47.060093   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | exit 0
	I0213 23:08:47.154417   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:47.154807   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetConfigRaw
	I0213 23:08:47.155614   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.158506   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.158979   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.159005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.159297   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:08:47.159557   49715 machine.go:88] provisioning docker machine ...
	I0213 23:08:47.159577   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:47.159833   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160012   49715 buildroot.go:166] provisioning hostname "default-k8s-diff-port-083863"
	I0213 23:08:47.160038   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160240   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.163021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163444   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.163476   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163705   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.163908   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164070   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164234   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.164391   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.164762   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.164777   49715 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-083863 && echo "default-k8s-diff-port-083863" | sudo tee /etc/hostname
	I0213 23:08:47.304583   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-083863
	
	I0213 23:08:47.304617   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.307729   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308160   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.308196   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308345   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.308541   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308713   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308921   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.309148   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.309520   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.309539   49715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-083863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-083863/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-083863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:47.442924   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:47.442958   49715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:47.442989   49715 buildroot.go:174] setting up certificates
	I0213 23:08:47.443006   49715 provision.go:83] configureAuth start
	I0213 23:08:47.443024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.443287   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.446220   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446611   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.446646   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446821   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.449591   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.449920   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.449989   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.450162   49715 provision.go:138] copyHostCerts
	I0213 23:08:47.450221   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:47.450241   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:47.450305   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:47.450482   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:47.450497   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:47.450532   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:47.450614   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:47.450625   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:47.450651   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:47.450720   49715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-083863 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube default-k8s-diff-port-083863]
	I0213 23:08:47.522550   49715 provision.go:172] copyRemoteCerts
	I0213 23:08:47.522618   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:47.522647   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.525731   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526189   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.526230   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526410   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.526610   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.526814   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.526971   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:47.626666   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:42.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.095528   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:46.095564   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:46.095581   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.178470   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.178500   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.313729   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.318658   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.318686   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.813274   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.819766   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.819808   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.313432   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.325228   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:47.325274   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.819686   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:08:47.829842   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:08:47.829896   49120 api_server.go:131] duration metric: took 5.517189469s to wait for apiserver health ...
	I0213 23:08:47.829907   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:47.829915   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:47.831685   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:48.354933   49036 start.go:369] acquired machines lock for "old-k8s-version-245122" in 54.536117689s
	I0213 23:08:48.354988   49036 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:48.354996   49036 fix.go:54] fixHost starting: 
	I0213 23:08:48.355410   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:48.355447   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:48.375953   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0213 23:08:48.376414   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:48.376997   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:08:48.377034   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:48.377373   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:48.377578   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:08:48.377709   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:08:48.379630   49036 fix.go:102] recreateIfNeeded on old-k8s-version-245122: state=Stopped err=<nil>
	I0213 23:08:48.379660   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	W0213 23:08:48.379822   49036 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:48.381473   49036 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-245122" ...
	I0213 23:08:44.831390   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.831503   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.845068   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.331710   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.331800   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.343755   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.831306   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.831415   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.844972   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.331510   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:46.331596   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:46.343475   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.343509   49443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:46.343520   49443 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:46.343532   49443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:46.343595   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:46.388343   49443 cri.go:89] found id: ""
	I0213 23:08:46.388417   49443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:46.403792   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:46.413139   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:46.413197   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422541   49443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422566   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:46.551204   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.427625   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.656205   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.776652   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.860844   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:47.860942   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.362058   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.861851   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:49.361973   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:47.655867   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 23:08:47.687226   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:47.719579   49715 provision.go:86] duration metric: configureAuth took 276.554247ms
	I0213 23:08:47.719610   49715 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:47.719857   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:47.719945   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.723023   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723353   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.723386   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723686   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.723889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724074   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724299   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.724469   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.724860   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.724878   49715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:48.093490   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:48.093519   49715 machine.go:91] provisioned docker machine in 933.948787ms
	I0213 23:08:48.093529   49715 start.go:300] post-start starting for "default-k8s-diff-port-083863" (driver="kvm2")
	I0213 23:08:48.093540   49715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:48.093553   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.093887   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:48.093922   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.096941   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097351   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.097385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097701   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.097936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.098145   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.098367   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.188626   49715 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:48.193282   49715 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:48.193320   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:48.193406   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:48.193500   49715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:48.193597   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:48.202782   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:48.235000   49715 start.go:303] post-start completed in 141.454861ms
	I0213 23:08:48.235032   49715 fix.go:56] fixHost completed within 19.576181803s
	I0213 23:08:48.235051   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.238450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.238992   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.239024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.239320   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.239535   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239683   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239846   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.240085   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:48.240390   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:48.240401   49715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:48.354769   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865728.300012904
	
	I0213 23:08:48.354799   49715 fix.go:206] guest clock: 1707865728.300012904
	I0213 23:08:48.354811   49715 fix.go:219] Guest: 2024-02-13 23:08:48.300012904 +0000 UTC Remote: 2024-02-13 23:08:48.235035663 +0000 UTC m=+225.644270499 (delta=64.977241ms)
	I0213 23:08:48.354837   49715 fix.go:190] guest clock delta is within tolerance: 64.977241ms
	I0213 23:08:48.354845   49715 start.go:83] releasing machines lock for "default-k8s-diff-port-083863", held for 19.696026805s
	I0213 23:08:48.354884   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.355246   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:48.358586   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359040   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.359081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359323   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.359961   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360127   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360200   49715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:48.360233   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.360372   49715 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:48.360398   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.363529   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.363715   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364166   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364357   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364394   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364461   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364656   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.364824   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370192   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.370221   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.370404   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370677   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.457230   49715 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:48.484954   49715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:48.636752   49715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:48.644369   49715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:48.644452   49715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:48.667562   49715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:48.667594   49715 start.go:475] detecting cgroup driver to use...
	I0213 23:08:48.667684   49715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:48.689737   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:48.708806   49715 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:48.708876   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:48.728530   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:48.746819   49715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:48.877519   49715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:49.069574   49715 docker.go:233] disabling docker service ...
	I0213 23:08:49.069661   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:49.103853   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:49.122356   49715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:49.272225   49715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:49.412111   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:49.428799   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:49.449679   49715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:49.449734   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.465458   49715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:49.465523   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.480399   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.494161   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.507964   49715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:49.522486   49715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:49.534468   49715 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:49.534538   49715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:49.554260   49715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:49.566868   49715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:49.725125   49715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:49.963096   49715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:49.963172   49715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:49.970420   49715 start.go:543] Will wait 60s for crictl version
	I0213 23:08:49.970508   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:08:49.976177   49715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:50.024316   49715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:50.024407   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.080031   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.133918   49715 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:48.382835   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Start
	I0213 23:08:48.383129   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring networks are active...
	I0213 23:08:48.384069   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network default is active
	I0213 23:08:48.384458   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network mk-old-k8s-version-245122 is active
	I0213 23:08:48.385051   49036 main.go:141] libmachine: (old-k8s-version-245122) Getting domain xml...
	I0213 23:08:48.387192   49036 main.go:141] libmachine: (old-k8s-version-245122) Creating domain...
	I0213 23:08:49.933195   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting to get IP...
	I0213 23:08:49.934463   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:49.935084   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:49.935109   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:49.934961   50565 retry.go:31] will retry after 206.578168ms: waiting for machine to come up
	I0213 23:08:50.143704   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.144239   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.144263   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.144177   50565 retry.go:31] will retry after 378.113433ms: waiting for machine to come up
	I0213 23:08:50.524043   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.524670   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.524703   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.524629   50565 retry.go:31] will retry after 468.261692ms: waiting for machine to come up
	I0213 23:08:50.995002   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.995616   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.995645   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.995524   50565 retry.go:31] will retry after 437.792222ms: waiting for machine to come up
	I0213 23:08:50.135427   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:50.139087   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139523   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:50.139556   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139840   49715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:50.145191   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:50.159814   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:50.159873   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:50.208873   49715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:50.208947   49715 ssh_runner.go:195] Run: which lz4
	I0213 23:08:50.214254   49715 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:50.219979   49715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:50.220013   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:47.833116   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:47.862550   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:47.895377   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:47.919843   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:47.919894   49120 system_pods.go:61] "coredns-76f75df574-hgzcn" [a384c748-9d5b-4d07-b03c-5a65b3d7a450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:47.919907   49120 system_pods.go:61] "etcd-no-preload-778731" [44169811-10f1-4d3e-8eaa-b525dd0f722f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:47.919920   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [126febb5-8d0b-4162-b320-7fd718b4a974] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:47.919929   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [a7be9641-1bd0-41f9-853a-73b522c60746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:47.919945   49120 system_pods.go:61] "kube-proxy-msxf7" [81201ce9-6f3d-457c-b582-eb8a17dbf4eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:47.919968   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [72f487c5-c42e-4e42-85c8-3b3df6bccd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:47.919984   49120 system_pods.go:61] "metrics-server-57f55c9bc5-r44rm" [ae0751b9-57fe-4d99-b41c-5c685b846e1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:47.919996   49120 system_pods.go:61] "storage-provisioner" [e1d157b3-7ce1-488c-a3ea-ab0e8da83fb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:47.920009   49120 system_pods.go:74] duration metric: took 24.606913ms to wait for pod list to return data ...
	I0213 23:08:47.920031   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:47.930765   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:47.930810   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:47.930827   49120 node_conditions.go:105] duration metric: took 10.783663ms to run NodePressure ...
	I0213 23:08:47.930848   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:48.401055   49120 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407167   49120 kubeadm.go:787] kubelet initialised
	I0213 23:08:48.407238   49120 kubeadm.go:788] duration metric: took 6.148946ms waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407260   49120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:48.414170   49120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:50.427883   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:52.431208   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:49.861114   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.361308   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.861249   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.894694   49443 api_server.go:72] duration metric: took 3.033850926s to wait for apiserver process to appear ...
	I0213 23:08:50.894724   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:50.894746   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:50.895231   49443 api_server.go:269] stopped: https://192.168.61.56:8443/healthz: Get "https://192.168.61.56:8443/healthz": dial tcp 192.168.61.56:8443: connect: connection refused
	I0213 23:08:51.394882   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:51.435131   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:51.435705   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:51.435733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:51.435616   50565 retry.go:31] will retry after 631.237829ms: waiting for machine to come up
	I0213 23:08:52.069120   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.069697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.069719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.069617   50565 retry.go:31] will retry after 756.691364ms: waiting for machine to come up
	I0213 23:08:52.828166   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.828631   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.828662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.828562   50565 retry.go:31] will retry after 761.909065ms: waiting for machine to come up
	I0213 23:08:53.592196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:53.592753   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:53.592779   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:53.592685   50565 retry.go:31] will retry after 1.153412106s: waiting for machine to come up
	I0213 23:08:54.747606   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:54.748184   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:54.748221   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:54.748113   50565 retry.go:31] will retry after 1.198347182s: waiting for machine to come up
	I0213 23:08:55.947978   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:55.948524   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:55.948545   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:55.948469   50565 retry.go:31] will retry after 2.116247229s: waiting for machine to come up
	I0213 23:08:52.713946   49715 crio.go:444] Took 2.499735 seconds to copy over tarball
	I0213 23:08:52.714030   49715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:56.483125   49715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.769061262s)
	I0213 23:08:56.483156   49715 crio.go:451] Took 3.769175 seconds to extract the tarball
	I0213 23:08:56.483167   49715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:56.524290   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:56.576319   49715 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:56.576349   49715 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:56.576435   49715 ssh_runner.go:195] Run: crio config
	I0213 23:08:56.633481   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:08:56.633514   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:56.633537   49715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:56.633561   49715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-083863 NodeName:default-k8s-diff-port-083863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:56.633744   49715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-083863"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
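
The evictionHard values above render as "0%!"(MISSING) because the generated config contains literal percent signs (nodefs.available: "0%", and so on) and was evidently passed through a printf-style logging call with no arguments; Go's fmt package then reports the stray %" verb as missing. Only the log rendering is garbled; the kubeadm.yaml written to the node presumably carries the intended "0%" values. A minimal reproduction of the artifact in Go:

	package main

	import "fmt"

	func main() {
		// The config text contains a literal `%` immediately followed by `"`.
		// With no argument left to consume, fmt treats `"` as a verb and
		// reports it as missing, which matches the log lines above.
		fmt.Printf("nodefs.available: \"0%\"\n")
		// Output: nodefs.available: "0%!"(MISSING)
	}
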
	
	I0213 23:08:56.633838   49715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-083863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 23:08:56.633930   49715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:56.643018   49715 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:56.643110   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:56.652116   49715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 23:08:56.670140   49715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:56.687456   49715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0213 23:08:56.707317   49715 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:56.711339   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:56.726090   49715 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863 for IP: 192.168.39.3
	I0213 23:08:56.726139   49715 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:56.726320   49715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:56.726381   49715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:56.726486   49715 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.key
	I0213 23:08:56.755690   49715 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key.599d509e
	I0213 23:08:56.755797   49715 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key
	I0213 23:08:56.755953   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:56.755996   49715 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:56.756008   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:56.756042   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:56.756072   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:56.756104   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:56.756157   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:56.756999   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:56.790072   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:56.821182   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:56.849753   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:56.875241   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:56.901057   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:56.929989   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:56.959488   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:56.991678   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:57.019756   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:57.047743   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:57.078812   49715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:57.097081   49715 ssh_runner.go:195] Run: openssl version
	I0213 23:08:57.103754   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:57.117364   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124069   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124160   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.132252   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:57.145398   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:57.158348   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164091   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164158   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.171693   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:57.185004   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:57.198410   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204432   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204495   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.210331   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:57.221567   49715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:57.226357   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:57.232307   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:57.239034   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:57.245485   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:57.252782   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:57.259406   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
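
The openssl x509 -checkend 86400 runs above ask whether each certificate will still be valid 86400 seconds (24 hours) from now; a failing check would presumably force the certificate to be regenerated rather than reused. A rough Go equivalent of that check, shown here only as a sketch (the path is taken from the log; the logic is not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path will no longer be
	// valid d from now, i.e. the same condition openssl's -checkend tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
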
	I0213 23:08:57.265644   49715 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:57.265744   49715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:57.265820   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:57.313129   49715 cri.go:89] found id: ""
	I0213 23:08:57.313210   49715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:57.323716   49715 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:57.323747   49715 kubeadm.go:636] restartCluster start
	I0213 23:08:57.323837   49715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:57.333805   49715 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.335100   49715 kubeconfig.go:92] found "default-k8s-diff-port-083863" server: "https://192.168.39.3:8444"
	I0213 23:08:57.337669   49715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:57.347371   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.347434   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.359168   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
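
The long runs of "unable to get apiserver pid" warnings in this section all come from the probe shown above: pgrep -xnf kube-apiserver.*minikube.* exits 1 while no matching kube-apiserver process is running, and the timestamps show the restart logic re-checking roughly every 500ms. A minimal sketch of that probe (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverPID mimics `sudo pgrep -xnf kube-apiserver.*minikube.*`:
	// return the PID of the newest process whose full command line matches
	// the pattern, or an error when nothing matches.
	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when there is no match, which is what the
			// repeated "Process exited with status 1" lines report.
			return "", fmt.Errorf("apiserver not running: %w", err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		if pid, err := apiserverPID(); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("kube-apiserver pid:", pid)
		}
	}
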
	I0213 23:08:53.424206   49120 pod_ready.go:92] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:53.424235   49120 pod_ready.go:81] duration metric: took 5.01002772s waiting for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:53.424249   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:55.432858   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:54.636558   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.636595   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.636612   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.714679   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.714727   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.894910   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.909668   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:54.909716   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.395328   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.401124   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.401155   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.895827   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.901814   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.901848   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.395611   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.402367   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.402404   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.894889   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.900228   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.900267   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.394804   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.404774   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.404811   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.895090   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.902470   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.902527   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:58.395650   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:58.404727   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:08:58.413383   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:08:58.413425   49443 api_server.go:131] duration metric: took 7.518687282s to wait for apiserver health ...
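
The sequence above is the usual apiserver bring-up pattern: anonymous requests get 403 until the RBAC roles that expose /healthz to unauthenticated clients are bootstrapped, /healthz then returns 500 while post-start hooks such as rbac/bootstrap-roles are still running, and finally 200 "ok" (here after about 7.5 seconds). A minimal polling loop with the same shape, as a sketch only (TLS verification is skipped because the test cluster uses its own CA; a real client should trust the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it
	// returns 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.56:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
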
	I0213 23:08:58.413437   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:58.413444   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:58.415682   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:58.417320   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:58.436763   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:58.468658   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:58.482719   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:58.482755   49443 system_pods.go:61] "coredns-5dd5756b68-h86p6" [9d274749-fe12-43c1-b30c-70586c04daf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:58.482762   49443 system_pods.go:61] "etcd-embed-certs-340656" [1fbdd834-b8c1-48c9-aab7-3c72d7012eca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:58.482770   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [3bb1cfb1-8fea-4b7a-a459-a709010ee6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:58.482783   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [f8035337-1819-4b0b-83eb-1992445c0185] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:58.482790   49443 system_pods.go:61] "kube-proxy-swxwt" [2bbc949c-f478-4c01-9e81-884a05a9a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:58.482795   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [923ef614-eef1-4e32-ae83-2e540841060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:58.482831   49443 system_pods.go:61] "metrics-server-57f55c9bc5-lmcwv" [a948cc5d-01b6-4298-a7c7-24d9704497d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:58.482846   49443 system_pods.go:61] "storage-provisioner" [9fc17bde-ff30-4ed7-829c-3d59badd55f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:58.482854   49443 system_pods.go:74] duration metric: took 14.17202ms to wait for pod list to return data ...
	I0213 23:08:58.482865   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:58.487666   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:58.487710   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:58.487723   49443 node_conditions.go:105] duration metric: took 4.851634ms to run NodePressure ...
	I0213 23:08:58.487743   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:59.044504   49443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088347   49443 kubeadm.go:787] kubelet initialised
	I0213 23:08:59.088379   49443 kubeadm.go:788] duration metric: took 43.842389ms waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088390   49443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:59.105292   49443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.067162   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:58.067629   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:58.067662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:58.067589   50565 retry.go:31] will retry after 2.740013841s: waiting for machine to come up
	I0213 23:09:00.811129   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:00.811590   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:00.811623   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:00.811537   50565 retry.go:31] will retry after 3.449503247s: waiting for machine to come up
	I0213 23:08:57.848036   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.848128   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.863924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.348357   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.348539   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.364081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.848249   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.848321   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.860671   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.348282   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.348385   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.364226   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.847737   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.847838   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.864832   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.348231   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.348311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.360532   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.848115   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.848220   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.861558   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.348101   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.348192   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.360173   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.847696   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.847788   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.859631   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:02.348255   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.348353   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.363081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.943272   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:58.432531   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:58.432613   49120 pod_ready.go:81] duration metric: took 5.008354336s waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.432631   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:00.441099   49120 pod_ready.go:102] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:01.440207   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.440235   49120 pod_ready.go:81] duration metric: took 3.0075951s waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.440249   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446456   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.446483   49120 pod_ready.go:81] duration metric: took 6.224957ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446495   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452476   49120 pod_ready.go:92] pod "kube-proxy-msxf7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.452509   49120 pod_ready.go:81] duration metric: took 6.006176ms waiting for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452520   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457619   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.457640   49120 pod_ready.go:81] duration metric: took 5.112826ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457648   49120 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.113738   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:03.114003   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.262520   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:04.262989   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:04.263018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:04.262939   50565 retry.go:31] will retry after 3.540479459s: waiting for machine to come up
	I0213 23:09:02.847964   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.848073   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.863100   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.347510   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.347608   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.362561   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.847536   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.847635   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.863357   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.347939   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.348026   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.363027   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.847491   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.847576   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.858924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.347449   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.347527   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.359307   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.847845   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.847934   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.859530   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.348136   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.348231   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.360149   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.847699   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.847786   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.859859   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.347717   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:07.347806   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:07.360175   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.360211   49715 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:07.360223   49715 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:07.360234   49715 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:07.360304   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:07.400269   49715 cri.go:89] found id: ""
	I0213 23:09:07.400360   49715 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:07.416990   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:07.426513   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:07.426588   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436165   49715 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436197   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:07.602305   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:03.467176   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:05.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.614199   49443 pod_ready.go:92] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:04.614230   49443 pod_ready.go:81] duration metric: took 5.508903545s waiting for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:04.614244   49443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:06.621198   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:08.622226   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:07.807018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:07.807577   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:07.807609   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:07.807519   50565 retry.go:31] will retry after 4.623412618s: waiting for machine to come up
	I0213 23:09:08.566096   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.757816   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.894570   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
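
The reconfigure path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of a full kubeadm init. A condensed sketch of driving that phase sequence from Go follows, with the binary and config paths taken from the log and error handling simplified; minikube itself runs these over SSH through ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them; paths copied from the log above.
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", args, err, out)
			return
		}
	}
	fmt.Println("all kubeadm phases completed")
}
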
	I0213 23:09:08.984493   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:08.984609   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.485363   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.984792   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.485221   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.985649   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.485311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.516028   49715 api_server.go:72] duration metric: took 2.531534981s to wait for apiserver process to appear ...
	I0213 23:09:11.516054   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:11.516076   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
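
From here the log polls https://192.168.39.3:8444/healthz, first getting 403 for the anonymous user and then 500 while some poststarthooks are still failing, until the endpoint eventually returns 200. A rough polling sketch against the same endpoint is shown below, using InsecureSkipVerify purely to keep the example short; minikube authenticates with the cluster's client certificates rather than skipping verification.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; TLS verification is skipped for brevity only.
	url := "https://192.168.39.3:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 responses (like those in the log) mean "not ready yet".
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
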
	I0213 23:09:08.466006   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.965586   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.623965   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.623991   49443 pod_ready.go:81] duration metric: took 6.009738992s waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.624002   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631790   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.631813   49443 pod_ready.go:81] duration metric: took 7.802592ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631830   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638042   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.638065   49443 pod_ready.go:81] duration metric: took 6.226067ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638077   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645111   49443 pod_ready.go:92] pod "kube-proxy-swxwt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.645135   49443 pod_ready.go:81] duration metric: took 7.051124ms waiting for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645146   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651681   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.651703   49443 pod_ready.go:81] duration metric: took 6.550486ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651712   49443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:12.659172   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:12.435133   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435720   49036 main.go:141] libmachine: (old-k8s-version-245122) Found IP for machine: 192.168.50.36
	I0213 23:09:12.435751   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has current primary IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435762   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserving static IP address...
	I0213 23:09:12.436196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.436241   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | skip adding static IP to network mk-old-k8s-version-245122 - found existing host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"}
	I0213 23:09:12.436262   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserved static IP address: 192.168.50.36
	I0213 23:09:12.436280   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting for SSH to be available...
	I0213 23:09:12.436296   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Getting to WaitForSSH function...
	I0213 23:09:12.438534   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.438892   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.438925   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.439062   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH client type: external
	I0213 23:09:12.439099   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa (-rw-------)
	I0213 23:09:12.439149   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:09:12.439183   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | About to run SSH command:
	I0213 23:09:12.439202   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | exit 0
	I0213 23:09:12.541930   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | SSH cmd err, output: <nil>: 
	I0213 23:09:12.542357   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetConfigRaw
	I0213 23:09:12.543071   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.546226   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546714   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.546747   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546955   49036 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:09:12.547163   49036 machine.go:88] provisioning docker machine ...
	I0213 23:09:12.547200   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:12.547445   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547594   49036 buildroot.go:166] provisioning hostname "old-k8s-version-245122"
	I0213 23:09:12.547615   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547770   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.550250   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.550734   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550939   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.551160   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551322   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.551648   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.551974   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.552000   49036 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname
	I0213 23:09:12.705495   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245122
	
	I0213 23:09:12.705528   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.708503   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.708860   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.708893   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.709092   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.709277   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709657   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.709831   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.710263   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.710285   49036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245122/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:09:12.858225   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:09:12.858266   49036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:09:12.858287   49036 buildroot.go:174] setting up certificates
	I0213 23:09:12.858300   49036 provision.go:83] configureAuth start
	I0213 23:09:12.858313   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.858624   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.861374   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861727   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.861759   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.864007   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864334   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.864370   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864549   49036 provision.go:138] copyHostCerts
	I0213 23:09:12.864627   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:09:12.864643   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:09:12.864728   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:09:12.864853   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:09:12.864868   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:09:12.864904   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:09:12.865008   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:09:12.865018   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:09:12.865049   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:09:12.865130   49036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245122 san=[192.168.50.36 192.168.50.36 localhost 127.0.0.1 minikube old-k8s-version-245122]
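
The server certificate above is generated with SANs covering the machine IP, localhost, 127.0.0.1, minikube and the profile name, then copied to /etc/docker on the guest. Below is a small crypto/x509 sketch producing a cert with the same SANs; it self-signs to stay short, whereas minikube signs the server cert with its own CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the provision.go line above; self-signing is for illustration only.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-245122"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-245122"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.50.36"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
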
	I0213 23:09:12.938444   49036 provision.go:172] copyRemoteCerts
	I0213 23:09:12.938508   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:09:12.938530   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.941384   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.941758   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941989   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.942202   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.942394   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.942545   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.041212   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:09:13.069849   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 23:09:13.092979   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:09:13.115949   49036 provision.go:86] duration metric: configureAuth took 257.625697ms
	I0213 23:09:13.115983   49036 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:09:13.116196   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:13.116279   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.119207   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119644   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.119684   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119901   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.120096   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120288   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120443   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.120599   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.121149   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.121179   49036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:09:13.453399   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:09:13.453431   49036 machine.go:91] provisioned docker machine in 906.25243ms
	I0213 23:09:13.453444   49036 start.go:300] post-start starting for "old-k8s-version-245122" (driver="kvm2")
	I0213 23:09:13.453459   49036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:09:13.453479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.453816   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:09:13.453849   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.457033   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457355   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.457388   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457560   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.457778   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.457991   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.458207   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.559903   49036 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:09:13.566012   49036 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:09:13.566046   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:09:13.566119   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:09:13.566215   49036 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:09:13.566336   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:09:13.578878   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:13.610396   49036 start.go:303] post-start completed in 156.935564ms
	I0213 23:09:13.610434   49036 fix.go:56] fixHost completed within 25.25543712s
	I0213 23:09:13.610459   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.613960   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614271   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.614330   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614575   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.614828   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615081   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615275   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.615494   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.615954   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.615977   49036 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:09:13.759068   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865753.693690059
	
	I0213 23:09:13.759095   49036 fix.go:206] guest clock: 1707865753.693690059
	I0213 23:09:13.759106   49036 fix.go:219] Guest: 2024-02-13 23:09:13.693690059 +0000 UTC Remote: 2024-02-13 23:09:13.610438113 +0000 UTC m=+362.380845041 (delta=83.251946ms)
	I0213 23:09:13.759130   49036 fix.go:190] guest clock delta is within tolerance: 83.251946ms
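
The guest clock check above compares the timestamp reported by the VM over SSH with the host-side timestamp and accepts it when the delta stays within tolerance; here the delta is 83.251946ms. A tiny reproduction of that comparison with the values from the log follows; the 1-second tolerance is an assumption made only for this sketch.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log: guest clock (epoch nanoseconds) vs. the host-side Remote timestamp.
	guest := time.Unix(0, 1707865753693690059) // 1707865753.693690059
	remote := time.Date(2024, 2, 13, 23, 9, 13, 610438113, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 1 * time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
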
	I0213 23:09:13.759136   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 25.404173426s
	I0213 23:09:13.759161   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.759480   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:13.762537   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.762928   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.762967   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.763172   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763718   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763907   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763998   49036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:09:13.764050   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.764122   49036 ssh_runner.go:195] Run: cat /version.json
	I0213 23:09:13.764149   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.767081   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767387   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767526   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767558   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767736   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.767812   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767834   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.768002   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.768190   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768220   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768343   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768370   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.768490   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.886145   49036 ssh_runner.go:195] Run: systemctl --version
	I0213 23:09:13.892222   49036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:09:14.044107   49036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:09:14.051031   49036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:09:14.051134   49036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:09:14.071908   49036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:09:14.071942   49036 start.go:475] detecting cgroup driver to use...
	I0213 23:09:14.072026   49036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:09:14.091007   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:09:14.105419   49036 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:09:14.105501   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:09:14.120760   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:09:14.135296   49036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:09:14.267338   49036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:09:14.403936   49036 docker.go:233] disabling docker service ...
	I0213 23:09:14.404023   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:09:14.419791   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:09:14.434449   49036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:09:14.569365   49036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:09:14.700619   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:09:14.718646   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:09:14.738870   49036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0213 23:09:14.738944   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.750436   49036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:09:14.750529   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.762397   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.773950   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.786798   49036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:09:14.801457   49036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:09:14.813254   49036 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:09:14.813331   49036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:09:14.830374   49036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:09:14.840984   49036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:09:14.994777   49036 ssh_runner.go:195] Run: sudo systemctl restart crio
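
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod"), clears the stale CNI state, loads br_netfilter, enables IP forwarding and restarts crio. The sketch below replays the same command strings locally via os/exec; the commands are copied verbatim from the log and are only meant to be run inside a minikube guest, not on a workstation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the log above (minikube normally runs them over SSH in the guest).
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo rm -rf /etc/cni/net.mk`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%s failed: %v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}
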
	I0213 23:09:15.193564   49036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:09:15.193657   49036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:09:15.200616   49036 start.go:543] Will wait 60s for crictl version
	I0213 23:09:15.200749   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:15.205888   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:09:15.249751   49036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:09:15.249884   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.302320   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.361046   49036 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0213 23:09:15.362396   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:15.365548   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366008   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:15.366041   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366287   49036 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:09:15.370727   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:15.384064   49036 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:09:15.384171   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:15.432027   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:15.432110   49036 ssh_runner.go:195] Run: which lz4
	I0213 23:09:15.436393   49036 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:09:15.440914   49036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:09:15.440956   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0213 23:09:15.218410   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:15.218442   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:15.218457   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.346077   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.346112   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:15.516188   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.523339   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.523371   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.016747   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.024910   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.024944   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.516538   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.528640   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.528673   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:17.016269   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:17.022413   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:09:17.033775   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:09:17.033807   49715 api_server.go:131] duration metric: took 5.51774459s to wait for apiserver health ...
	I0213 23:09:17.033819   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:09:17.033828   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:17.035635   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:17.037195   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:17.064472   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:17.115519   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:17.133771   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:09:17.133887   49715 system_pods.go:61] "coredns-5dd5756b68-cvtjg" [507ded52-9061-4ab7-8298-31847da5dad3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:09:17.133914   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [2ef46644-d4d0-4e8c-b2aa-4e154780be70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:09:17.133952   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [c1f51407-cfd9-4329-9153-2dacb87952c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:09:17.133975   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [1ad24825-8c75-4220-a316-2dd4826da8fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:09:17.133995   49715 system_pods.go:61] "kube-proxy-zzskr" [fb71ceb1-9f9a-4c8b-ae1e-1eeb91706110] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:09:17.134015   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [4500697c-7313-4217-9843-14edb2c7fdb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:09:17.134042   49715 system_pods.go:61] "metrics-server-57f55c9bc5-p97jh" [dc549bc9-87e4-4cb6-99b5-e937f2916d6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:09:17.134063   49715 system_pods.go:61] "storage-provisioner" [c5ad957d-09f9-46e7-b0e7-e7c0b13f671f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:09:17.134081   49715 system_pods.go:74] duration metric: took 18.533785ms to wait for pod list to return data ...
	I0213 23:09:17.134103   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:17.145025   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:17.145131   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:17.145159   49715 node_conditions.go:105] duration metric: took 11.041762ms to run NodePressure ...
	I0213 23:09:17.145201   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
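The "Checking apiserver healthz" loop above hits https://192.168.39.3:8444/healthz roughly every 500ms and keeps going while the endpoint answers 500, stopping once it returns 200. A minimal standalone Go sketch of that kind of probe (not minikube's actual implementation; the URL, interval and timeout here are placeholders) looks like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the timeout expires,
// printing the body of non-200 responses much like the log above does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// the apiserver certificate is self-signed in this setup, so the probe skips verification
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.3:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}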
	I0213 23:09:13.466367   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:15.966324   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:14.661158   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:16.663448   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:19.164418   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.224597   49036 crio.go:444] Took 1.788234 seconds to copy over tarball
	I0213 23:09:17.224685   49036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:09:20.618866   49036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.394137292s)
	I0213 23:09:20.618905   49036 crio.go:451] Took 3.394273 seconds to extract the tarball
	I0213 23:09:20.618918   49036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:09:20.665417   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:20.718004   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:20.718036   49036 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.718175   49036 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.718201   49036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.718126   49036 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.718148   49036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.718154   49036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.718181   49036 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719739   49036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719784   49036 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.719745   49036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.719855   49036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.719951   49036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.720062   49036 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 23:09:20.720172   49036 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.720184   49036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.877532   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.894803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.906336   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.909341   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.910608   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 23:09:20.933612   49036 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 23:09:20.933664   49036 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.933724   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:20.947803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.979922   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.026909   49036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 23:09:21.026953   49036 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.026986   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.034243   49036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 23:09:21.034279   49036 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.034321   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.053547   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:21.068143   49036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 23:09:21.068194   49036 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 23:09:21.068228   49036 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0213 23:09:21.068195   49036 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0213 23:09:21.068318   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.110630   49036 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 23:09:21.110695   49036 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.110747   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.120732   49036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 23:09:21.120777   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.120781   49036 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.120851   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.120887   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.272660   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0213 23:09:21.272723   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 23:09:21.272771   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.272813   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.272858   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 23:09:21.272914   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.272966   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
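The "needs transfer" decisions above come from asking the container runtime for each image's ID and falling back to the on-disk cache when the lookup fails or the hash does not match. A rough sketch of that check, using os/exec and the coredns image/hash pair shown in the log (illustrative only, not minikube's cache_images code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent asks podman for the image ID and compares it with the expected
// hash; an erroring or mismatching lookup means the image must be loaded from cache.
func imagePresent(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false // inspect fails when the image is not in the runtime
	}
	return strings.TrimSpace(string(out)) == wantID
}

func main() {
	ok := imagePresent("registry.k8s.io/coredns:1.6.2",
		"bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b")
	fmt.Println("image already in runtime:", ok)
}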
	I0213 23:09:17.706218   49715 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713293   49715 kubeadm.go:787] kubelet initialised
	I0213 23:09:17.713322   49715 kubeadm.go:788] duration metric: took 7.076014ms waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713332   49715 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:17.724146   49715 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:19.733686   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.412892   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.970757   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:20.466081   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.467149   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.660264   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:23.660813   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.375314   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 23:09:21.376306   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 23:09:21.376453   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 23:09:21.376491   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 23:09:21.585135   49036 cache_images.go:92] LoadImages completed in 867.071904ms
	W0213 23:09:21.585230   49036 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0213 23:09:21.585316   49036 ssh_runner.go:195] Run: crio config
	I0213 23:09:21.650741   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:21.650767   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:21.650789   49036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:09:21.650812   49036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245122 NodeName:old-k8s-version-245122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:09:21.650991   49036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-245122"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-245122
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.36:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"

	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:09:21.651106   49036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-245122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:09:21.651173   49036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 23:09:21.662478   49036 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:09:21.662558   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:09:21.672654   49036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0213 23:09:21.690609   49036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:09:21.708199   49036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0213 23:09:21.728361   49036 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0213 23:09:21.732450   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:21.747349   49036 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122 for IP: 192.168.50.36
	I0213 23:09:21.747391   49036 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:21.747532   49036 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:09:21.747582   49036 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:09:21.747644   49036 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.key
	I0213 23:09:21.958574   49036 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key.e3c4a843
	I0213 23:09:21.958790   49036 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key
	I0213 23:09:21.958978   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:09:21.959024   49036 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:09:21.959040   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:09:21.959090   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:09:21.959135   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:09:21.959168   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:09:21.959234   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:21.960121   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:09:21.986921   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:09:22.011993   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:09:22.038194   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:09:22.064839   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:09:22.089629   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:09:22.116404   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:09:22.141615   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:09:22.167298   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:09:22.194577   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:09:22.220140   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:09:22.245124   49036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:09:22.265798   49036 ssh_runner.go:195] Run: openssl version
	I0213 23:09:22.273510   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:09:22.287657   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294180   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294261   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.300826   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:09:22.313535   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:09:22.324047   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329069   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329171   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.335862   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:09:22.347417   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:09:22.358082   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363477   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363536   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.369915   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:09:22.380910   49036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:09:22.385812   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:09:22.392981   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:09:22.400722   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:09:22.409089   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:09:22.417036   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:09:22.423381   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
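Each of the openssl "-checkend 86400" runs above asks whether a certificate will still be valid 24 hours from now. An equivalent check written in Go (a sketch; the certificate path is one of the paths shown above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the
// given window, i.e. whether "openssl x509 -checkend <window>" would fail.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}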
	I0213 23:09:22.430098   49036 kubeadm.go:404] StartCluster: {Name:old-k8s-version-245122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:09:22.430177   49036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:09:22.430246   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:22.490283   49036 cri.go:89] found id: ""
	I0213 23:09:22.490371   49036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:09:22.500902   49036 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:09:22.500931   49036 kubeadm.go:636] restartCluster start
	I0213 23:09:22.501004   49036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:09:22.511985   49036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:22.513298   49036 kubeconfig.go:92] found "old-k8s-version-245122" server: "https://192.168.50.36:8443"
	I0213 23:09:22.516673   49036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:09:22.526466   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:22.526561   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:22.539541   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.027052   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.027161   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.039390   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.527142   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.527234   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.539846   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.027048   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.027144   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.038367   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.526911   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.527012   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.538906   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.027095   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.027195   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.038232   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.526805   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.526911   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.540281   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:26.026811   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.026908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.039699   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.238007   49715 pod_ready.go:92] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:23.238035   49715 pod_ready.go:81] duration metric: took 5.513854942s waiting for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:23.238051   49715 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.744985   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:24.745007   49715 pod_ready.go:81] duration metric: took 1.506948533s waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.745015   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:26.751610   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:24.965048   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:27.465069   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.159564   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:28.660224   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.527051   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.527135   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.539382   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.026915   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.026990   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.038660   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.527300   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.527391   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.539714   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.027042   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.027124   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.039419   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.527549   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.527649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.540659   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.027032   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.027134   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.038415   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.526595   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.526690   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.538928   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.027041   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.027119   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.040125   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.526693   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.526765   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.540060   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:31.026988   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.027096   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.039327   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.755419   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.254128   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.254154   49715 pod_ready.go:81] duration metric: took 6.509132102s waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.254164   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262007   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.262032   49715 pod_ready.go:81] duration metric: took 7.859557ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262042   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267937   49715 pod_ready.go:92] pod "kube-proxy-zzskr" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.267959   49715 pod_ready.go:81] duration metric: took 5.911683ms waiting for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267967   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273442   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.273462   49715 pod_ready.go:81] duration metric: took 5.488135ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273471   49715 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
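The pod_ready lines above boil down to fetching each pod and checking whether its PodReady condition is True. A minimal client-go sketch of that check (assuming a kubeconfig at the default location; the namespace and pod name are taken from the log and stand in for whichever pod is being waited on):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady fetches the pod and reports whether its PodReady condition is True.
func podReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(client, "kube-system", "coredns-5dd5756b68-cvtjg")
	fmt.Println("ready:", ready, "err:", err)
}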
	I0213 23:09:29.466908   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.965093   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.159176   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.159463   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.526738   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.526879   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.539174   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.026678   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.026780   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.039078   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.527030   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.527120   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.539058   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.539094   49036 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:32.539105   49036 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:32.539116   49036 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:32.539188   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:32.583832   49036 cri.go:89] found id: ""
	I0213 23:09:32.583931   49036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:32.600343   49036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:32.609666   49036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:32.609744   49036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619068   49036 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619093   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:32.751642   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:33.784796   49036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03311496s)
	I0213 23:09:33.784825   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.013311   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.172539   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.290655   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:34.290759   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:34.791649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.290908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.791035   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:33.283651   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.798120   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.966930   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.465311   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.160502   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:37.163077   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.291009   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.791117   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.809796   49036 api_server.go:72] duration metric: took 2.519141205s to wait for apiserver process to appear ...
	I0213 23:09:36.809851   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:36.809880   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:38.282180   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.282368   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:38.466126   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.967293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.811101   49036 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 23:09:41.811184   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.485465   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.485495   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.485516   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.539632   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.539667   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.809967   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.823007   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:42.823043   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.310359   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.318326   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:43.318384   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.809942   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.816666   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:09:43.824593   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:09:43.824622   49036 api_server.go:131] duration metric: took 7.014763564s to wait for apiserver health ...
	I0213 23:09:43.824639   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:43.824647   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:43.826660   49036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:39.659667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.660321   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.664984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.827993   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:43.837268   49036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:43.855659   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:43.864719   49036 system_pods.go:59] 7 kube-system pods found
	I0213 23:09:43.864756   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:09:43.864764   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:09:43.864770   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:09:43.864778   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Pending
	I0213 23:09:43.864783   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:09:43.864789   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:09:43.864795   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:09:43.864803   49036 system_pods.go:74] duration metric: took 9.113954ms to wait for pod list to return data ...
	I0213 23:09:43.864812   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:43.872183   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:43.872222   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:43.872237   49036 node_conditions.go:105] duration metric: took 7.415138ms to run NodePressure ...
	I0213 23:09:43.872269   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:44.129786   49036 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134864   49036 kubeadm.go:787] kubelet initialised
	I0213 23:09:44.134891   49036 kubeadm.go:788] duration metric: took 5.071047ms waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134901   49036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:44.139027   49036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.143942   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143967   49036 pod_ready.go:81] duration metric: took 4.910454ms waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.143978   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143986   49036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.147838   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147923   49036 pod_ready.go:81] duration metric: took 3.927311ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.147935   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147944   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.152465   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152490   49036 pod_ready.go:81] duration metric: took 4.536109ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.152500   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152508   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.259273   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259309   49036 pod_ready.go:81] duration metric: took 106.789068ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.259325   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259334   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.659385   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659423   49036 pod_ready.go:81] duration metric: took 400.079528ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.659436   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659443   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:45.065474   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065510   49036 pod_ready.go:81] duration metric: took 406.055078ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:45.065524   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065533   49036 pod_ready.go:38] duration metric: took 930.621868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:45.065555   49036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:09:45.100009   49036 ops.go:34] apiserver oom_adj: -16
	I0213 23:09:45.100037   49036 kubeadm.go:640] restartCluster took 22.599099367s
	I0213 23:09:45.100049   49036 kubeadm.go:406] StartCluster complete in 22.6699561s
	I0213 23:09:45.100070   49036 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.100156   49036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:09:45.103031   49036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.103315   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:09:45.103447   49036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:09:45.103540   49036 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103562   49036 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-245122"
	I0213 23:09:45.103571   49036 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103593   49036 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:45.103603   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:45.103638   49036 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103693   49036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245122"
	W0213 23:09:45.103608   49036 addons.go:243] addon metrics-server should already be in state true
	W0213 23:09:45.103577   49036 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:09:45.103879   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104144   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104215   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104227   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.104318   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.103829   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104877   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104904   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.123332   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0213 23:09:45.123486   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0213 23:09:45.123555   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0213 23:09:45.123964   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124143   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124148   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124449   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124469   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124650   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124674   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124654   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124743   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124965   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125030   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125083   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.125564   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125567   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125598   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.125612   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.129046   49036 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-245122"
	W0213 23:09:45.129065   49036 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:09:45.129085   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.129385   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.129415   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.145900   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0213 23:09:45.146570   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.147144   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.147164   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.147448   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.147635   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.156023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.158533   49036 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:09:45.159815   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:09:45.159837   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:09:45.159862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.163799   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164445   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.164472   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164859   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.165112   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.165340   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.165523   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.166097   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0213 23:09:45.166513   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.167086   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.167111   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.167442   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.167623   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.168284   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0213 23:09:45.168855   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.169453   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.169471   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.169702   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.169992   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.171532   49036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:45.170687   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.172965   49036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.172979   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.172983   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:09:45.173009   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.176733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177198   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.177232   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177269   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.177506   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.177675   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.177885   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.190339   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0213 23:09:45.190750   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.191239   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.191267   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.191609   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.191803   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.193470   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.193730   49036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.193748   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:09:45.193769   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.196896   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197422   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.197459   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197745   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.197935   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.198191   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.198301   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.392787   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:09:45.392808   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:09:45.426298   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.440984   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.452209   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:09:45.452239   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:09:45.531203   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:45.531226   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:09:45.593779   49036 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 23:09:45.621016   49036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245122" context rescaled to 1 replicas
	I0213 23:09:45.621056   49036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:09:45.623081   49036 out.go:177] * Verifying Kubernetes components...
	I0213 23:09:45.624623   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:09:45.631546   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:46.116692   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116732   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.116735   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116736   49036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:46.116754   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117125   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117172   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117183   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117192   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117201   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117203   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117218   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117228   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117247   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117667   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117671   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117708   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117728   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117962   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117980   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140111   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.140133   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.140411   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.140441   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140431   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.228877   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.228908   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229250   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229273   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229273   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.229283   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.229293   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229523   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229538   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229558   49036 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:46.231176   49036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:09:46.232329   49036 addons.go:505] enable addons completed in 1.128872958s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:09:42.783163   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:44.783634   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.281934   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.465665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:45.964909   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:46.160084   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.664267   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.120153   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:50.120636   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:49.781808   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.281392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.968701   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:50.465488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:51.161059   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:53.662099   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.121578   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:53.120859   49036 node_ready.go:49] node "old-k8s-version-245122" has status "Ready":"True"
	I0213 23:09:53.120885   49036 node_ready.go:38] duration metric: took 7.004121529s waiting for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:53.120896   49036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:53.129174   49036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:55.136200   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.283011   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.286197   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.964530   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.964679   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.966183   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.159475   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.160233   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:57.636373   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.137616   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.782611   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:59.465313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.465877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.660202   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.159244   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:02.635052   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:04.636231   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.284083   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.781701   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.966234   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.465225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.160136   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.160817   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.161703   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.636789   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.135398   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.135441   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.782000   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.782948   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.785161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:08.465688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:10.967225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.658937   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.661460   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.138346   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.636437   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:14.282538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.781339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.465521   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.965224   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.162065   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:18.658525   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.648838   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.137226   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:19.282514   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:21.781917   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.966716   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.464644   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.465071   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.659514   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.662481   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.636371   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.136197   49036 pod_ready.go:92] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.136234   49036 pod_ready.go:81] duration metric: took 31.007029263s waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.136249   49036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142089   49036 pod_ready.go:92] pod "etcd-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.142114   49036 pod_ready.go:81] duration metric: took 5.854061ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142127   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149372   49036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.149396   49036 pod_ready.go:81] duration metric: took 7.261015ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149409   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158342   49036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.158371   49036 pod_ready.go:81] duration metric: took 8.953577ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158384   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165154   49036 pod_ready.go:92] pod "kube-proxy-nj7qx" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.165177   49036 pod_ready.go:81] duration metric: took 6.785683ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165186   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533838   49036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.533863   49036 pod_ready.go:81] duration metric: took 368.670292ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533896   49036 pod_ready.go:38] duration metric: took 31.412988042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:10:24.533912   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:10:24.534007   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:10:24.549186   49036 api_server.go:72] duration metric: took 38.928101792s to wait for apiserver process to appear ...
	I0213 23:10:24.549217   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:10:24.549238   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:10:24.557366   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:10:24.558364   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:10:24.558387   49036 api_server.go:131] duration metric: took 9.165129ms to wait for apiserver health ...
	I0213 23:10:24.558396   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:10:24.736365   49036 system_pods.go:59] 8 kube-system pods found
	I0213 23:10:24.736396   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:24.736401   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:24.736405   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:24.736409   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:24.736413   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:24.736417   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:24.736423   49036 system_pods.go:61] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:24.736429   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:24.736437   49036 system_pods.go:74] duration metric: took 178.035411ms to wait for pod list to return data ...
	I0213 23:10:24.736444   49036 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:10:24.934360   49036 default_sa.go:45] found service account: "default"
	I0213 23:10:24.934390   49036 default_sa.go:55] duration metric: took 197.940334ms for default service account to be created ...
	I0213 23:10:24.934400   49036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:10:25.135904   49036 system_pods.go:86] 8 kube-system pods found
	I0213 23:10:25.135933   49036 system_pods.go:89] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:25.135940   49036 system_pods.go:89] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:25.135944   49036 system_pods.go:89] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:25.135949   49036 system_pods.go:89] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:25.135954   49036 system_pods.go:89] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:25.135959   49036 system_pods.go:89] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:25.135967   49036 system_pods.go:89] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:25.135973   49036 system_pods.go:89] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:25.135982   49036 system_pods.go:126] duration metric: took 201.576732ms to wait for k8s-apps to be running ...
	I0213 23:10:25.135992   49036 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:10:25.136035   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:10:25.151540   49036 system_svc.go:56] duration metric: took 15.53628ms WaitForService to wait for kubelet.
	I0213 23:10:25.151582   49036 kubeadm.go:581] duration metric: took 39.530502672s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:10:25.151608   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:10:25.333026   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:10:25.333067   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:10:25.333083   49036 node_conditions.go:105] duration metric: took 181.468311ms to run NodePressure ...
	I0213 23:10:25.333171   49036 start.go:228] waiting for startup goroutines ...
	I0213 23:10:25.333186   49036 start.go:233] waiting for cluster config update ...
	I0213 23:10:25.333200   49036 start.go:242] writing updated cluster config ...
	I0213 23:10:25.333540   49036 ssh_runner.go:195] Run: rm -f paused
	I0213 23:10:25.385974   49036 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0213 23:10:25.388225   49036 out.go:177] 
	W0213 23:10:25.389965   49036 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0213 23:10:25.391288   49036 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0213 23:10:25.392550   49036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-245122" cluster and "default" namespace by default
	I0213 23:10:24.281840   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.782341   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.467427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.965363   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:25.158811   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:27.158903   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.162245   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.283592   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.781156   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.465534   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.965570   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.163299   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.664184   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:34.281475   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.282050   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.966548   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.465588   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.159425   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.161056   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.781806   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.782565   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.465618   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.966613   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.659031   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.660105   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:43.282453   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.782436   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.967065   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.465277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.161783   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.659092   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:48.281903   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:50.782326   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.965978   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.972688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:52.464489   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.661150   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:51.661183   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.159746   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:53.280877   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:55.281432   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.465386   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.966020   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.659863   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.161127   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:57.781250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:00.283244   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.464959   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.466871   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.660636   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:04.162081   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:02.782971   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.282593   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:03.964986   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.967545   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:06.660761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.663916   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:07.783437   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.280975   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.281595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.466954   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.965354   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:11.159761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:13.160656   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:14.281819   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:16.781331   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.965830   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.464980   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.659894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.659996   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:18.782849   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.281343   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.965490   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.965841   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:22.465427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.660194   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.660348   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.158929   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:23.281731   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:25.282299   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.966008   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.463392   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:26.160687   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:28.160792   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.783770   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.282652   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:29.464941   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:31.965436   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.160850   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.661971   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.781595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.282110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:33.966260   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:36.465148   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.160093   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.160571   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.782870   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.281536   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:38.466898   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.965121   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:39.659930   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.160848   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.782134   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.287871   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.966494   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:45.465485   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.477988   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.659259   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:46.660566   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.165414   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.781501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.282150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.965827   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.465337   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:51.658915   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.160444   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.286142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.783072   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.465900   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.466029   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.659103   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.660419   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.784481   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.282749   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.965179   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.465662   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:00.661165   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.161035   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.787946   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:06.281932   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.964460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.966240   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.660384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.159544   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.781709   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.782556   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.465300   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.472665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.660651   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.159097   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.281500   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.781953   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:12.965510   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:14.966435   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.465559   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.160583   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.659605   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.784167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:20.280384   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:22.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.468825   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.965088   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.659644   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.662561   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.160923   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.781351   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:27.281938   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:23.966646   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.465094   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.160986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.161300   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:29.780690   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.282298   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.965450   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:31.467937   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.659169   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.659681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.782495   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.782679   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:33.965594   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.465409   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.660174   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.660802   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.160838   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.281205   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.281734   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:38.465702   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:40.965477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.659732   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:44.159873   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:43.780979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.781438   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:42.966342   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.464993   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.465742   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:46.162330   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:48.659964   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.782513   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:50.281255   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:52.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:49.967402   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.968499   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.161451   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:53.659594   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.782653   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.782779   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.465429   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.466199   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:55.659986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:57.661028   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:59.280842   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.281110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:58.965410   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:00.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.458755   49120 pod_ready.go:81] duration metric: took 4m0.00109163s waiting for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:01.458812   49120 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:01.458839   49120 pod_ready.go:38] duration metric: took 4m13.051566827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:01.458873   49120 kubeadm.go:640] restartCluster took 4m33.496925279s
	W0213 23:13:01.458967   49120 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:01.459008   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:00.160188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:02.663549   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:03.285939   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.782469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.165196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:07.661417   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:08.283394   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.286257   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.161461   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.652828   49443 pod_ready.go:81] duration metric: took 4m0.001101625s waiting for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:10.652857   49443 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:10.652877   49443 pod_ready.go:38] duration metric: took 4m11.564476633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:10.652905   49443 kubeadm.go:640] restartCluster took 4m34.344806193s
	W0213 23:13:10.652970   49443 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:10.652997   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:12.782042   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:15.282782   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:16.418651   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.959611919s)
	I0213 23:13:16.418750   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:16.435137   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:16.448436   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:16.459777   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:16.459826   49120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:16.708111   49120 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:17.782474   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:20.283238   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:22.782418   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:24.782894   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:26.784203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:28.667785   49120 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:13:28.667865   49120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:28.668000   49120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:28.668151   49120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:28.668282   49120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:28.668372   49120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:28.670147   49120 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:28.670266   49120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:28.670367   49120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:28.670480   49120 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:28.670559   49120 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:28.670674   49120 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:28.670763   49120 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:28.670864   49120 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:28.670964   49120 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:28.671068   49120 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:28.671163   49120 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:28.671221   49120 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:28.671296   49120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:28.671368   49120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:28.671440   49120 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0213 23:13:28.671506   49120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:28.671580   49120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:28.671658   49120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:28.671734   49120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:28.671791   49120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:28.673351   49120 out.go:204]   - Booting up control plane ...
	I0213 23:13:28.673448   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:28.673535   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:28.673627   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:28.673744   49120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:28.673846   49120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:28.673903   49120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:28.674084   49120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:28.674176   49120 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.010705 seconds
	I0213 23:13:28.674315   49120 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:28.674470   49120 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:28.674543   49120 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:28.674766   49120 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-778731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:28.674832   49120 kubeadm.go:322] [bootstrap-token] Using token: dwjaqi.e4fr4bxqfdq63m9e
	I0213 23:13:28.676266   49120 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:28.676392   49120 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:28.676495   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:28.676671   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:28.676871   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:28.677028   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:28.677142   49120 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:28.677283   49120 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:28.677337   49120 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:28.677392   49120 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:28.677405   49120 kubeadm.go:322] 
	I0213 23:13:28.677476   49120 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:28.677488   49120 kubeadm.go:322] 
	I0213 23:13:28.677586   49120 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:28.677599   49120 kubeadm.go:322] 
	I0213 23:13:28.677631   49120 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:28.677712   49120 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:28.677780   49120 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:28.677793   49120 kubeadm.go:322] 
	I0213 23:13:28.677864   49120 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:28.677881   49120 kubeadm.go:322] 
	I0213 23:13:28.677941   49120 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:28.677948   49120 kubeadm.go:322] 
	I0213 23:13:28.678019   49120 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:28.678125   49120 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:28.678215   49120 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:28.678223   49120 kubeadm.go:322] 
	I0213 23:13:28.678324   49120 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:28.678426   49120 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:28.678433   49120 kubeadm.go:322] 
	I0213 23:13:28.678544   49120 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.678685   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:28.678714   49120 kubeadm.go:322] 	--control-plane 
	I0213 23:13:28.678722   49120 kubeadm.go:322] 
	I0213 23:13:28.678834   49120 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:28.678841   49120 kubeadm.go:322] 
	I0213 23:13:28.678950   49120 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.679094   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:28.679106   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:13:28.679116   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:28.680826   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:25.241610   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.588591305s)
	I0213 23:13:25.241679   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:25.257221   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:25.271651   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:25.285556   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:25.285615   49443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:25.530438   49443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:29.281713   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:31.274625   49715 pod_ready.go:81] duration metric: took 4m0.00114055s waiting for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:31.274654   49715 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:31.274676   49715 pod_ready.go:38] duration metric: took 4m13.561333764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:31.274700   49715 kubeadm.go:640] restartCluster took 4m33.95094669s
	W0213 23:13:31.274766   49715 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:31.274807   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:28.682020   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:28.710027   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:28.752989   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:28.753118   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:28.753117   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=no-preload-778731 minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.147657   49120 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:29.147806   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.647920   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.648105   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.148819   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.648877   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.647939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.005257   49443 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:37.005340   49443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:37.005464   49443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:37.005611   49443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:37.005750   49443 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:37.005836   49443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:37.007501   49443 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:37.007606   49443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:37.007687   49443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:37.007782   49443 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:37.007869   49443 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:37.007960   49443 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:37.008047   49443 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:37.008139   49443 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:37.008221   49443 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:37.008324   49443 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:37.008437   49443 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:37.008488   49443 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:37.008577   49443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:37.008657   49443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:37.008742   49443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:37.008837   49443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:37.008916   49443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:37.009044   49443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:37.009150   49443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:37.010808   49443 out.go:204]   - Booting up control plane ...
	I0213 23:13:37.010943   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:37.011053   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:37.011155   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:37.011537   49443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:37.011661   49443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:37.011720   49443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:37.011915   49443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:37.012024   49443 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005842 seconds
	I0213 23:13:37.012154   49443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:37.012297   49443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:37.012376   49443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:37.012595   49443 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-340656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:37.012668   49443 kubeadm.go:322] [bootstrap-token] Using token: 0y2cx5.j4vucgv3wtut6xkw
	I0213 23:13:37.014296   49443 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:37.014433   49443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:37.014535   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:37.014697   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:37.014837   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:37.014966   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:37.015073   49443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:37.015203   49443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:37.015256   49443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:37.015316   49443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:37.015326   49443 kubeadm.go:322] 
	I0213 23:13:37.015393   49443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:37.015403   49443 kubeadm.go:322] 
	I0213 23:13:37.015500   49443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:37.015511   49443 kubeadm.go:322] 
	I0213 23:13:37.015535   49443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:37.015603   49443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:37.015668   49443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:37.015677   49443 kubeadm.go:322] 
	I0213 23:13:37.015744   49443 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:37.015754   49443 kubeadm.go:322] 
	I0213 23:13:37.015814   49443 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:37.015824   49443 kubeadm.go:322] 
	I0213 23:13:37.015889   49443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:37.015981   49443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:37.016075   49443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:37.016087   49443 kubeadm.go:322] 
	I0213 23:13:37.016182   49443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:37.016272   49443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:37.016282   49443 kubeadm.go:322] 
	I0213 23:13:37.016371   49443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016486   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:37.016522   49443 kubeadm.go:322] 	--control-plane 
	I0213 23:13:37.016527   49443 kubeadm.go:322] 
	I0213 23:13:37.016637   49443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:37.016643   49443 kubeadm.go:322] 
	I0213 23:13:37.016739   49443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016875   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:37.016887   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:13:37.016895   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:37.018483   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:33.148023   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:33.648861   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.147939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.648160   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.148620   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.648710   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.148263   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.648202   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.148597   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.648067   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.019795   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:37.080689   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:37.145132   49443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:37.145273   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.145374   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=embed-certs-340656 minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.195322   49443 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:37.575387   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.075523   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.575550   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.075996   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.148294   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.648747   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.148671   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.648021   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.148566   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.648799   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.148354   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.257502   49120 kubeadm.go:1088] duration metric: took 12.504501087s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:41.257549   49120 kubeadm.go:406] StartCluster complete in 5m13.347836612s
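The long runs of repeated "kubectl get sa default" calls above are a readiness poll: the elevateKubeSystemPrivileges step retries until the default service account exists in the new cluster, and the ~500ms spacing of the timestamps matches that retry cadence. minikube performs this from Go, so the loop below is only a rough shell equivalent built from the command that appears verbatim in the log:

	# Rough shell equivalent of the "get sa default" poll logged above (the loop itself is illustrative).
	until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the retries in this log
	done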
	I0213 23:13:41.257573   49120 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.257681   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:41.260299   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.260647   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:41.260677   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:41.260755   49120 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778731"
	I0213 23:13:41.260779   49120 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778731"
	W0213 23:13:41.260787   49120 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:41.260777   49120 addons.go:69] Setting metrics-server=true in profile "no-preload-778731"
	I0213 23:13:41.260807   49120 addons.go:234] Setting addon metrics-server=true in "no-preload-778731"
	W0213 23:13:41.260815   49120 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:41.260840   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260858   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260882   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:13:41.261207   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261227   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261267   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261291   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261426   49120 addons.go:69] Setting default-storageclass=true in profile "no-preload-778731"
	I0213 23:13:41.261447   49120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778731"
	I0213 23:13:41.261807   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261899   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.278449   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0213 23:13:41.278646   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0213 23:13:41.278874   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.278992   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.279367   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279389   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279460   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279485   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279748   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.279929   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.280301   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280345   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280389   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280403   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0213 23:13:41.280420   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280729   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.281302   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.281324   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.281723   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.281932   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.286017   49120 addons.go:234] Setting addon default-storageclass=true in "no-preload-778731"
	W0213 23:13:41.286039   49120 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:41.286067   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.286476   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.286511   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.299018   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0213 23:13:41.299266   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0213 23:13:41.299626   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.299951   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.300111   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300127   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300624   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300656   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300707   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.300885   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.301280   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.301628   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.303270   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.304846   49120 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:41.303809   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.306034   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:41.306048   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:41.306068   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.307731   49120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:41.309028   49120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.309045   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:41.309065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.309214   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0213 23:13:41.309635   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.309722   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310208   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.310227   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.310342   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.310379   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310514   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.310731   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.310877   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.310900   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.311093   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.311466   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.311516   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.312194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312559   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.312580   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312814   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.313006   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.313140   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.313283   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.327021   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0213 23:13:41.327605   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.328038   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.328055   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.328399   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.328596   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.330082   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.330333   49120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.330344   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:41.330356   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.333321   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333703   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.333731   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.334075   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.334494   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.334643   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.502879   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:41.534876   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:41.534908   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:41.587429   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.589619   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.616755   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:41.616783   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:41.688015   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.688039   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:41.777647   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.844418   49120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-778731" context rescaled to 1 replicas
	I0213 23:13:41.844460   49120 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:41.847252   49120 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:41.848614   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:42.311509   49120 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
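The "host record injected" line above confirms that the sed pipeline from 23:13:41.502879 rewrote the CoreDNS Corefile so host.minikube.internal resolves to the host-side address 192.168.83.1. A hypothetical way to eyeball the injected stanza (the grep pattern and -A3 window are assumptions; the expected block is taken from the sed expression in the log):

	# Hypothetical check of the injected host record (names and IP come from the log above).
	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	#        hosts {
	#           192.168.83.1 host.minikube.internal
	#           fallthrough
	#        }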
	I0213 23:13:42.915046   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327574246s)
	I0213 23:13:42.915112   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915127   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915219   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325575731s)
	I0213 23:13:42.915241   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915250   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915430   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.915467   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.915475   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.915485   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915493   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917607   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917640   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917673   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917652   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917719   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917730   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917764   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.917773   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917996   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.918014   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.963310   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.963336   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.963632   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.963652   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999467   49120 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.150816624s)
	I0213 23:13:42.999513   49120 node_ready.go:35] waiting up to 6m0s for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:42.999542   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221849263s)
	I0213 23:13:42.999604   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999620   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.999914   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.999932   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999944   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999953   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:43.000322   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:43.000341   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:43.000355   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:43.000372   49120 addons.go:470] Verifying addon metrics-server=true in "no-preload-778731"
	I0213 23:13:43.003022   49120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
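After the "Enabled addons" line, an equivalent manual check from the host would be to confirm the addon workloads actually came up in kube-system. The commands below are a hypothetical verification sketch: the context name follows minikube's profile-equals-context convention, and the deployment, pod, and storageclass names are assumptions based on the addon names rather than values read from this log:

	# Hypothetical follow-up checks for the addons reported enabled above.
	kubectl --context no-preload-778731 -n kube-system rollout status deployment/metrics-server --timeout=2m
	kubectl --context no-preload-778731 -n kube-system get pod storage-provisioner
	kubectl --context no-preload-778731 get storageclass standard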
	I0213 23:13:39.575883   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.076191   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.575969   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.075959   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.576297   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.075511   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.575528   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.076112   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.575825   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:44.076340   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.156104   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.881268834s)
	I0213 23:13:46.156183   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:46.173816   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:46.185578   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:46.196865   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:46.196911   49715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:46.251785   49715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:46.251863   49715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:46.416331   49715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:46.416503   49715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:46.416643   49715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:46.690351   49715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:46.692352   49715 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:46.692470   49715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:46.692583   49715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:46.692710   49715 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:46.692812   49715 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:46.692929   49715 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:46.693027   49715 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:46.693116   49715 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:46.693220   49715 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:46.693322   49715 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:46.693423   49715 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:46.693480   49715 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:46.693559   49715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:46.919270   49715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:47.096236   49715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:47.207058   49715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:47.262083   49715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:47.262614   49715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:47.265288   49715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:47.267143   49715 out.go:204]   - Booting up control plane ...
	I0213 23:13:47.267277   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:47.267383   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:47.267570   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:47.284718   49715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:47.286027   49715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:47.286152   49715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:47.443974   49715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:43.004170   49120 addons.go:505] enable addons completed in 1.743494195s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:43.030538   49120 node_ready.go:49] node "no-preload-778731" has status "Ready":"True"
	I0213 23:13:43.030566   49120 node_ready.go:38] duration metric: took 31.039482ms waiting for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:43.030581   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:43.041854   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:43.085259   49120 pod_ready.go:97] pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085310   49120 pod_ready.go:81] duration metric: took 43.414984ms waiting for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:43.085328   49120 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085337   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094656   49120 pod_ready.go:92] pod "coredns-76f75df574-f4g5w" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.094686   49120 pod_ready.go:81] duration metric: took 2.009341273s waiting for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094696   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101331   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.101352   49120 pod_ready.go:81] duration metric: took 6.650644ms waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101362   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108662   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.108686   49120 pod_ready.go:81] duration metric: took 7.317621ms waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108695   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115600   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.115620   49120 pod_ready.go:81] duration metric: took 6.918739ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115629   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403942   49120 pod_ready.go:92] pod "kube-proxy-7vcqq" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.403977   49120 pod_ready.go:81] duration metric: took 288.33703ms waiting for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403990   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804609   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.804646   49120 pod_ready.go:81] duration metric: took 400.646621ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804661   49120 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
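The node_ready/pod_ready checks above poll the API until the node and each system-critical pod report Ready. The kubectl waits below are only an illustrative equivalent of those checks, not the code path minikube actually uses; the label selectors are the ones listed in the pod_ready line at 23:13:43.030581:

	# Illustrative kubectl equivalent of the readiness waits logged above.
	kubectl --context no-preload-778731 wait --for=condition=Ready node/no-preload-778731 --timeout=6m
	kubectl --context no-preload-778731 -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=6m
	kubectl --context no-preload-778731 -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-proxy --timeout=6m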
	I0213 23:13:44.575423   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.076435   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.575498   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.076393   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.575716   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.075439   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.575623   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.076149   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.575619   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.757507   49443 kubeadm.go:1088] duration metric: took 11.612278698s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:48.757567   49443 kubeadm.go:406] StartCluster complete in 5m12.504615736s
	I0213 23:13:48.757592   49443 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.757689   49443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:48.760402   49443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.760794   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:48.761145   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:13:48.761320   49443 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:48.761392   49443 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-340656"
	I0213 23:13:48.761411   49443 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-340656"
	W0213 23:13:48.761420   49443 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:48.761470   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762064   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762094   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762173   49443 addons.go:69] Setting default-storageclass=true in profile "embed-certs-340656"
	I0213 23:13:48.762208   49443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-340656"
	I0213 23:13:48.762334   49443 addons.go:69] Setting metrics-server=true in profile "embed-certs-340656"
	I0213 23:13:48.762359   49443 addons.go:234] Setting addon metrics-server=true in "embed-certs-340656"
	W0213 23:13:48.762368   49443 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:48.762418   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762605   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762642   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762770   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762812   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.782845   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0213 23:13:48.782988   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0213 23:13:48.782993   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0213 23:13:48.783453   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783578   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783583   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.784018   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784038   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784160   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784177   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784197   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784211   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784431   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784636   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.784704   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784781   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.785241   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785264   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.785910   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785952   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.795703   49443 addons.go:234] Setting addon default-storageclass=true in "embed-certs-340656"
	W0213 23:13:48.795803   49443 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:48.795847   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.796295   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.796352   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.805562   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0213 23:13:48.806234   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.815444   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0213 23:13:48.815451   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.815558   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.817565   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.817770   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.818164   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.818796   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.818815   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.819308   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0213 23:13:48.819537   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.819661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.819723   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.821798   49443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:48.820119   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.821685   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.823106   49443 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:48.823122   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:48.823142   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.824803   49443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:48.826431   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.826467   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:48.826487   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:48.826507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.826393   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.826536   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.827054   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.827129   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.827155   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.827617   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.828067   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.828089   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.828119   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.828335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.828539   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.830417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.831572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.831604   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.832609   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.832827   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.832999   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.833165   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.851188   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0213 23:13:48.851868   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.852446   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.852482   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.852913   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.853134   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.855360   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.855766   49443 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:48.855792   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:48.855810   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.859610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.859877   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.859915   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.860263   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.860507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.860699   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.860854   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:49.015561   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:49.019336   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:49.047556   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:49.047593   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:49.083994   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:49.109749   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:49.109778   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:49.196430   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.196459   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:49.297603   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.306053   49443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-340656" context rescaled to 1 replicas
	I0213 23:13:49.306112   49443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:49.307559   49443 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:49.308883   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:51.125630   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109969214s)
	I0213 23:13:51.125663   49443 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:51.492579   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473198087s)
	I0213 23:13:51.492655   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492672   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492587   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.408541587s)
	I0213 23:13:51.492794   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492820   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493027   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493041   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493052   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493061   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493362   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493392   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493401   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493458   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493492   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493501   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493511   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493520   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493768   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493791   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.550911   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.550944   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.551267   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.551319   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.728993   49443 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420033663s)
	I0213 23:13:51.729078   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.431431547s)
	I0213 23:13:51.729114   49443 node_ready.go:35] waiting up to 6m0s for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.729135   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729163   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729446   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729462   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729473   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729483   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729770   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.729803   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729813   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729823   49443 addons.go:470] Verifying addon metrics-server=true in "embed-certs-340656"
	I0213 23:13:51.732785   49443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:47.812862   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:49.820823   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:52.318873   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:51.733634   49443 addons.go:505] enable addons completed in 2.972313278s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:51.741252   49443 node_ready.go:49] node "embed-certs-340656" has status "Ready":"True"
	I0213 23:13:51.741279   49443 node_ready.go:38] duration metric: took 12.133263ms waiting for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.741290   49443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:51.749409   49443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766298   49443 pod_ready.go:92] pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.766331   49443 pod_ready.go:81] duration metric: took 1.01688514s waiting for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766345   49443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777697   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.777725   49443 pod_ready.go:81] duration metric: took 11.371663ms waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777738   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789006   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.789030   49443 pod_ready.go:81] duration metric: took 11.286651ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789040   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798798   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.798820   49443 pod_ready.go:81] duration metric: took 9.773358ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798829   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807522   49443 pod_ready.go:92] pod "kube-proxy-4vgt5" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:53.807555   49443 pod_ready.go:81] duration metric: took 1.00871819s waiting for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807569   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133771   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:54.133808   49443 pod_ready.go:81] duration metric: took 326.228368ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133819   49443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:55.947176   49715 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502842 seconds
	I0213 23:13:55.947340   49715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:55.968064   49715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:56.503592   49715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:56.503798   49715 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-083863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:57.020246   49715 kubeadm.go:322] [bootstrap-token] Using token: 1sfxye.gyrkuj525fbtgg0g
	I0213 23:13:57.021591   49715 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:57.021724   49715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:57.028718   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:57.038574   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:57.046578   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:57.051622   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:57.065769   49715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:57.091404   49715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:57.330768   49715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:57.436406   49715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:57.436445   49715 kubeadm.go:322] 
	I0213 23:13:57.436542   49715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:57.436556   49715 kubeadm.go:322] 
	I0213 23:13:57.436650   49715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:57.436681   49715 kubeadm.go:322] 
	I0213 23:13:57.436729   49715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:57.436813   49715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:57.436887   49715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:57.436898   49715 kubeadm.go:322] 
	I0213 23:13:57.436989   49715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:57.437002   49715 kubeadm.go:322] 
	I0213 23:13:57.437067   49715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:57.437078   49715 kubeadm.go:322] 
	I0213 23:13:57.437137   49715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:57.437227   49715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:57.437344   49715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:57.437365   49715 kubeadm.go:322] 
	I0213 23:13:57.437463   49715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:57.437561   49715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:57.437577   49715 kubeadm.go:322] 
	I0213 23:13:57.437713   49715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.437878   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:57.437915   49715 kubeadm.go:322] 	--control-plane 
	I0213 23:13:57.437925   49715 kubeadm.go:322] 
	I0213 23:13:57.438021   49715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:57.438032   49715 kubeadm.go:322] 
	I0213 23:13:57.438140   49715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.438284   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:57.438602   49715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:57.438886   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:13:57.438904   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:57.440968   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:57.442459   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:57.466652   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:57.538217   49715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:57.538279   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:57.538289   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=default-k8s-diff-port-083863 minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:54.320129   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.812983   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.141892   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:58.143201   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:57.914767   49715 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:57.914957   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.415274   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.915866   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.415351   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.915329   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.415646   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.915129   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.415803   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.915716   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:02.415378   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.815013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:01.312236   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:00.645227   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:03.145517   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:02.915447   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.415367   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.915183   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.416047   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.915850   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.415867   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.915570   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.415580   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.915010   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:07.415431   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.314560   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.817591   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.642499   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.644055   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.916067   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.415001   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.915359   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.415672   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.915997   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:10.105267   49715 kubeadm.go:1088] duration metric: took 12.567044904s to wait for elevateKubeSystemPrivileges.
	I0213 23:14:10.105293   49715 kubeadm.go:406] StartCluster complete in 5m12.839656692s
	I0213 23:14:10.105310   49715 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.105392   49715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:14:10.107335   49715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.107629   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:14:10.107747   49715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:14:10.107821   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:14:10.107841   49715 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107858   49715 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107866   49715 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-083863"
	I0213 23:14:10.107873   49715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-083863"
	W0213 23:14:10.107878   49715 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:14:10.107885   49715 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107905   49715 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.107917   49715 addons.go:243] addon metrics-server should already be in state true
	I0213 23:14:10.107941   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.107961   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.108282   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108352   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108368   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108382   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108392   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108355   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.124618   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0213 23:14:10.124636   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0213 23:14:10.125154   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125261   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125984   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.125990   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.126014   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126029   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126422   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126501   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126604   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.127038   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.127067   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131142   49715 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.131168   49715 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:14:10.131196   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.131628   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.131661   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131866   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0213 23:14:10.132342   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.133024   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.133044   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.133539   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.134069   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.134119   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.145244   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0213 23:14:10.145674   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.146213   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.146233   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.146642   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.146845   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.148779   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.151227   49715 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:14:10.152983   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:14:10.153004   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:14:10.150602   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0213 23:14:10.153029   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.154229   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.154857   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.154876   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.155560   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.156429   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.156476   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.156757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.157450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157680   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.157898   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.158068   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.158211   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.159437   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0213 23:14:10.159780   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.160316   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.160328   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.160712   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.160874   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.163133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.166002   49715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:14:10.168221   49715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.168239   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:14:10.168259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.172119   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172539   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.172562   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172800   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.173447   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.173609   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.173769   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.175322   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0213 23:14:10.175719   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.176212   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.176223   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.176556   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.176727   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.178938   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.179149   49715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.179163   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:14:10.179174   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.182253   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.182739   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.182773   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.183106   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.183259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.183425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.183534   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.327834   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:14:10.327857   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:14:10.362507   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.405623   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:14:10.405655   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:14:10.413284   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.427964   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:14:10.459317   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.459343   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:14:10.552860   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.687588   49715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-083863" context rescaled to 1 replicas
	I0213 23:14:10.687640   49715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:14:10.689888   49715 out.go:177] * Verifying Kubernetes components...
	I0213 23:14:10.691656   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:14:08.312251   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:10.313161   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.313239   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.671905   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.309368382s)
	I0213 23:14:12.671963   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258642736s)
	I0213 23:14:12.671974   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.671999   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672008   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244007691s)
	I0213 23:14:12.672048   49715 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 23:14:12.672013   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672319   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672358   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672414   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672428   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672440   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672391   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672502   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672511   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672522   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672672   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672713   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672825   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672842   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672845   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.718598   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.718635   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.718899   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.718948   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.718957   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992151   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.439242656s)
	I0213 23:14:12.992169   49715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.300483548s)
	I0213 23:14:12.992204   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992208   49715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:12.992219   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.992608   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.992650   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.992674   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992694   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992706   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.993012   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.993033   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.993082   49715 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-083863"
	I0213 23:14:12.994959   49715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:14:10.144369   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.642284   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.996304   49715 addons.go:505] enable addons completed in 2.888556474s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:14:13.017331   49715 node_ready.go:49] node "default-k8s-diff-port-083863" has status "Ready":"True"
	I0213 23:14:13.017356   49715 node_ready.go:38] duration metric: took 25.135832ms waiting for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:13.017369   49715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:14:13.040090   49715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047064   49715 pod_ready.go:92] pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.047105   49715 pod_ready.go:81] duration metric: took 2.006967952s waiting for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047119   49715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052773   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.052793   49715 pod_ready.go:81] duration metric: took 5.668033ms waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052801   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.057989   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.058012   49715 pod_ready.go:81] duration metric: took 5.204253ms waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.058024   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063408   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.063426   49715 pod_ready.go:81] duration metric: took 5.394681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063434   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068502   49715 pod_ready.go:92] pod "kube-proxy-kvz2b" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.068523   49715 pod_ready.go:81] duration metric: took 5.082168ms waiting for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068534   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445109   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.445132   49715 pod_ready.go:81] duration metric: took 376.590631ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445142   49715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:17.453588   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:14.816746   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.313290   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:15.141901   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.641098   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.453805   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.954116   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.812763   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.814338   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.641389   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.641735   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.142168   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.455003   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.952168   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.312468   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.813420   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.641722   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.141082   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:28.954054   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:30.954647   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.311343   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.312249   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.143011   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.642102   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.452218   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.453522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.457001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.314313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.812309   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:36.143532   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:38.640894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:39.955206   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.456339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.813776   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.314111   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.642572   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:43.141919   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:44.955150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.454324   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.813470   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.313382   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.143485   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.641760   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.954167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.453822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.814576   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:50.312600   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.313062   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.642698   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.141500   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.141646   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.454979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.953279   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.812403   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.813413   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.142104   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:58.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.453692   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.952522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.313705   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.813002   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:00.642441   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:02.644754   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.954032   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.453202   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.813780   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.312152   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:04.645545   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:07.142188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.454411   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:10.953929   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.813133   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.315282   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:09.641331   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.644066   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:14.141197   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.452937   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:15.453227   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:17.455142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.814488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.312013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.142256   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:19.956449   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.454447   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.313100   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.315124   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.642516   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:23.141725   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.955277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:26.956469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.813277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.813332   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.313503   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:25.148206   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.642527   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.453659   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:31.953193   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.812921   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.311859   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.642812   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.141177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.141385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.452179   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.454250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.312263   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.812360   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.642681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.142639   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:38.952639   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:40.953841   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.311603   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.312975   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.640004   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.641689   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:42.954046   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.453175   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.812207   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:46.313761   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.642354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.141466   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:47.953013   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.455958   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.813689   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:51.312695   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.144359   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.145852   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.952203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.960421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.455215   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:53.312858   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:55.313197   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.313493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.642775   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.142159   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.143780   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.953718   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.954907   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.813086   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:02.313743   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.640609   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:03.641712   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.453269   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:06.454001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.813366   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.313460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:05.642520   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.644309   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:08.454568   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.953538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:09.315454   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:11.814145   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.142385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.644175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.953619   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.452015   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.455884   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:14.311599   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:16.312822   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.143506   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.643647   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:19.952742   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:21.953464   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:18.314298   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.812863   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.142175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:22.641953   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.953599   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.953715   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.312368   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.813170   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:24.642939   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:27.143008   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.452587   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.454360   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.314038   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.812058   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:29.642029   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.141959   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.142628   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.955547   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:35.453428   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.456558   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.813040   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.813607   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.314673   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:36.143091   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:38.147685   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.953073   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:42.452724   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.811843   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:41.811877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:40.645177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.140828   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:44.453277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.453393   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.813703   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.312231   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:45.141859   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:47.142843   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.453508   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.456357   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.312293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.812918   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:49.641676   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.142518   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.951784   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.954108   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.455497   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:53.312477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:55.313195   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.642918   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.141241   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.141855   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.954832   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.455675   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.811554   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.813709   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.313752   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:01.142778   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:03.143196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.953816   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.953967   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.812917   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.814681   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:05.644404   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:07.644824   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.455392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.953935   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.312828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.811876   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:10.141985   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:12.642984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.453572   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.454161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.314828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.813786   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:15.143013   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:17.143864   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.144089   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:18.952608   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:20.952810   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.312837   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.316700   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.641354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:24.142975   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:22.953607   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.453091   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.454501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:23.811674   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.814225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:26.640796   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:28.642684   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:29.952519   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.453137   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.816563   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.314052   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.642932   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:33.142380   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.456778   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.459583   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.812724   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.812895   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.813814   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:35.641888   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.144690   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.952822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.956268   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.821433   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:41.313306   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.641240   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:42.641667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.453378   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.953398   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.313457   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812519   49120 pod_ready.go:81] duration metric: took 4m0.007851911s waiting for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:45.812528   49120 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:45.812534   49120 pod_ready.go:38] duration metric: took 4m2.781943239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:45.812548   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:45.812574   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:45.812640   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:45.881239   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:45.881267   49120 cri.go:89] found id: ""
	I0213 23:17:45.881277   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:45.881327   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.886446   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:45.886531   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:45.926920   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:45.926947   49120 cri.go:89] found id: ""
	I0213 23:17:45.926955   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:45.927000   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.931500   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:45.931577   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:45.979081   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:45.979109   49120 cri.go:89] found id: ""
	I0213 23:17:45.979119   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:45.979174   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.984481   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:45.984539   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:46.035365   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.035385   49120 cri.go:89] found id: ""
	I0213 23:17:46.035392   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:46.035438   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.039634   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:46.039695   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:46.087404   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:46.087429   49120 cri.go:89] found id: ""
	I0213 23:17:46.087436   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:46.087490   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.091828   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:46.091889   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:46.133625   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:46.133651   49120 cri.go:89] found id: ""
	I0213 23:17:46.133658   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:46.133710   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.138378   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:46.138456   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:46.181018   49120 cri.go:89] found id: ""
	I0213 23:17:46.181048   49120 logs.go:276] 0 containers: []
	W0213 23:17:46.181058   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:46.181065   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:46.181141   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:46.221347   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.221374   49120 cri.go:89] found id: ""
	I0213 23:17:46.221385   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:46.221448   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.226298   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:46.226331   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:46.268881   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:46.268915   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.325183   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:46.325225   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.372600   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:46.372637   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:46.791381   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:46.791438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:46.861239   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:46.861431   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:46.884969   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:46.885009   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:46.909324   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:46.909352   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:46.966664   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:46.966698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:47.030276   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:47.030321   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:47.081480   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:47.081516   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:47.238201   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:47.238238   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:47.285995   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:47.286033   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:47.332459   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332486   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:47.332566   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:47.332580   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:47.332596   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:47.332616   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332622   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:44.643384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.141032   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.953650   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:50.453421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.453501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:49.641373   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.142827   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:54.141398   49443 pod_ready.go:81] duration metric: took 4m0.007567399s waiting for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:54.141420   49443 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:54.141428   49443 pod_ready.go:38] duration metric: took 4m2.400127673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:54.141441   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:54.141464   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:54.141506   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:54.203295   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:54.203319   49443 cri.go:89] found id: ""
	I0213 23:17:54.203329   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:54.203387   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.208671   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:54.208748   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:54.254150   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:54.254183   49443 cri.go:89] found id: ""
	I0213 23:17:54.254193   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:54.254259   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.259090   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:54.259178   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:54.309365   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:54.309385   49443 cri.go:89] found id: ""
	I0213 23:17:54.309392   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:54.309436   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.315937   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:54.316014   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:54.363796   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.363855   49443 cri.go:89] found id: ""
	I0213 23:17:54.363866   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:54.363926   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.368767   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:54.368842   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:54.417590   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:54.417620   49443 cri.go:89] found id: ""
	I0213 23:17:54.417637   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:54.417696   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.422980   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:54.423053   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:54.468990   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.469019   49443 cri.go:89] found id: ""
	I0213 23:17:54.469029   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:54.469094   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.473989   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:54.474073   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:54.524124   49443 cri.go:89] found id: ""
	I0213 23:17:54.524154   49443 logs.go:276] 0 containers: []
	W0213 23:17:54.524164   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:54.524172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:54.524239   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.953845   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.459517   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.333824   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:57.351216   49120 api_server.go:72] duration metric: took 4m15.50672707s to wait for apiserver process to appear ...
	I0213 23:17:57.351245   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:57.351281   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:57.351340   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:57.405928   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:57.405956   49120 cri.go:89] found id: ""
	I0213 23:17:57.405963   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:57.406007   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.410541   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:57.410619   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:57.456843   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:57.456871   49120 cri.go:89] found id: ""
	I0213 23:17:57.456881   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:57.456940   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.461801   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:57.461852   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:57.504653   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.504690   49120 cri.go:89] found id: ""
	I0213 23:17:57.504702   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:57.504762   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.509177   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:57.509250   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:57.556672   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:57.556696   49120 cri.go:89] found id: ""
	I0213 23:17:57.556704   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:57.556747   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.561343   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:57.561399   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:57.606959   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:57.606994   49120 cri.go:89] found id: ""
	I0213 23:17:57.607005   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:57.607068   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.611356   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:57.611440   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:57.655205   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:57.655230   49120 cri.go:89] found id: ""
	I0213 23:17:57.655238   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:57.655284   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.659762   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:57.659850   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:57.699989   49120 cri.go:89] found id: ""
	I0213 23:17:57.700012   49120 logs.go:276] 0 containers: []
	W0213 23:17:57.700019   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:57.700028   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:57.700075   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.562654   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.562674   49443 cri.go:89] found id: ""
	I0213 23:17:54.562682   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:54.562745   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.567182   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:54.567209   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:54.666809   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:54.666847   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:54.818292   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:54.818324   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.878074   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:54.878108   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.938472   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:54.938509   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.985201   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:54.985235   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:54.999987   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:55.000016   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:55.058536   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:55.058573   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:55.108130   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:55.108172   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:55.154299   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:55.154327   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:55.205554   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:55.205583   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:55.615944   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:55.615987   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.179069   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:58.194968   49443 api_server.go:72] duration metric: took 4m8.888826635s to wait for apiserver process to appear ...
	I0213 23:17:58.194992   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:58.195020   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:58.195067   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:58.245997   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.246029   49443 cri.go:89] found id: ""
	I0213 23:17:58.246038   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:58.246103   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.251486   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:58.251566   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:58.299878   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:58.299909   49443 cri.go:89] found id: ""
	I0213 23:17:58.299919   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:58.299977   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.305075   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:58.305139   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:58.352587   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:58.352617   49443 cri.go:89] found id: ""
	I0213 23:17:58.352628   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:58.352688   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.357493   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:58.357576   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:58.412181   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.412203   49443 cri.go:89] found id: ""
	I0213 23:17:58.412211   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:58.412265   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.418852   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:58.418931   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:58.470881   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.470907   49443 cri.go:89] found id: ""
	I0213 23:17:58.470916   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:58.470970   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.476768   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:58.476851   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:58.548272   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:58.548293   49443 cri.go:89] found id: ""
	I0213 23:17:58.548301   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:58.548357   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.553380   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:58.553452   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:58.599623   49443 cri.go:89] found id: ""
	I0213 23:17:58.599652   49443 logs.go:276] 0 containers: []
	W0213 23:17:58.599663   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:58.599669   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:58.599725   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:58.647872   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.647896   49443 cri.go:89] found id: ""
	I0213 23:17:58.647906   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:58.647966   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.653015   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:58.653041   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.707958   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:58.708000   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.759975   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:58.760015   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.814801   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:58.814833   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.853782   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.853814   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:59.217806   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:59.217854   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:59.278255   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:59.278294   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:59.385496   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:59.385537   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:59.953729   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:02.454016   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.740739   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:57.740774   49120 cri.go:89] found id: ""
	I0213 23:17:57.740785   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:57.740839   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.745140   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:57.745163   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:57.758556   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:57.758604   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:57.900468   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:57.900507   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.945665   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:57.945693   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:58.003484   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:58.003521   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:58.048797   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:58.048826   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.096309   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:58.096347   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:58.173795   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.173990   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.196277   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:58.196306   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:58.266087   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:58.266129   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:58.325638   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:58.325676   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:58.372711   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:58.372752   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:58.444057   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.444097   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:58.830470   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830511   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:58.830572   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:58.830591   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.830600   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.830610   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830618   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:59.544056   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:59.544517   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:59.607033   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:59.607067   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:59.654534   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:59.654584   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:59.719274   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:59.719309   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:02.234489   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:18:02.240412   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:18:02.241675   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:02.241699   49443 api_server.go:131] duration metric: took 4.046700263s to wait for apiserver health ...
	I0213 23:18:02.241710   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:02.241735   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:02.241796   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:02.289133   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:02.289158   49443 cri.go:89] found id: ""
	I0213 23:18:02.289166   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:18:02.289212   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.295450   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:02.295527   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:02.342262   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:02.342285   49443 cri.go:89] found id: ""
	I0213 23:18:02.342292   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:18:02.342337   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.346810   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:02.346874   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:02.385638   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:02.385665   49443 cri.go:89] found id: ""
	I0213 23:18:02.385673   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:18:02.385725   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.389834   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:02.389920   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:02.435078   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:02.435110   49443 cri.go:89] found id: ""
	I0213 23:18:02.435121   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:18:02.435184   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.440237   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:02.440297   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:02.483869   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.483891   49443 cri.go:89] found id: ""
	I0213 23:18:02.483899   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:18:02.483942   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.490454   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:02.490532   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:02.540971   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:02.541000   49443 cri.go:89] found id: ""
	I0213 23:18:02.541010   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:18:02.541069   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.545818   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:02.545906   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:02.593132   49443 cri.go:89] found id: ""
	I0213 23:18:02.593159   49443 logs.go:276] 0 containers: []
	W0213 23:18:02.593166   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:02.593172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:02.593222   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:02.634979   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.635015   49443 cri.go:89] found id: ""
	I0213 23:18:02.635028   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:18:02.635089   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.640246   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:18:02.640274   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.681426   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:18:02.681458   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.721033   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:02.721062   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:03.049340   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:03.049385   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:18:03.154378   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:18:03.154417   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:03.215045   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:18:03.215081   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:03.260291   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:18:03.260320   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:03.323526   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:18:03.323565   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:03.378686   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:03.378731   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:03.406717   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:03.406742   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:03.547999   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:18:03.548035   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:03.593226   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:18:03.593255   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:06.160914   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:06.160954   49443 system_pods.go:61] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.160963   49443 system_pods.go:61] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.160970   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.160977   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.160996   49443 system_pods.go:61] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.161008   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.161018   49443 system_pods.go:61] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.161025   49443 system_pods.go:61] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.161035   49443 system_pods.go:74] duration metric: took 3.919318115s to wait for pod list to return data ...
	I0213 23:18:06.161046   49443 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:06.165231   49443 default_sa.go:45] found service account: "default"
	I0213 23:18:06.165262   49443 default_sa.go:55] duration metric: took 4.207834ms for default service account to be created ...
	I0213 23:18:06.165271   49443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:06.172453   49443 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:06.172488   49443 system_pods.go:89] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.172494   49443 system_pods.go:89] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.172499   49443 system_pods.go:89] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.172503   49443 system_pods.go:89] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.172507   49443 system_pods.go:89] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.172512   49443 system_pods.go:89] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.172517   49443 system_pods.go:89] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.172522   49443 system_pods.go:89] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.172531   49443 system_pods.go:126] duration metric: took 7.254871ms to wait for k8s-apps to be running ...
	I0213 23:18:06.172541   49443 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:06.172598   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:06.193026   49443 system_svc.go:56] duration metric: took 20.479072ms WaitForService to wait for kubelet.
	I0213 23:18:06.193051   49443 kubeadm.go:581] duration metric: took 4m16.886913912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:06.193072   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:06.196910   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:06.196940   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:06.196951   49443 node_conditions.go:105] duration metric: took 3.874223ms to run NodePressure ...
	I0213 23:18:06.196962   49443 start.go:228] waiting for startup goroutines ...
	I0213 23:18:06.196968   49443 start.go:233] waiting for cluster config update ...
	I0213 23:18:06.196977   49443 start.go:242] writing updated cluster config ...
	I0213 23:18:06.197233   49443 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:06.248295   49443 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:06.250392   49443 out.go:177] * Done! kubectl is now configured to use "embed-certs-340656" cluster and "default" namespace by default
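The "Done!" line above closes the embed-certs-340656 start; each diagnostic pass before it first resolves a container ID per control-plane component and then tails that container's logs. A minimal shell sketch of that gathering sequence, built only from commands recorded in the log above (component name and the 400-line tail are taken from those lines; run on the minikube node over ssh, and it assumes crictl returns a single matching ID):

    # discover the container ID for one component (here kube-apiserver), as cri.go does
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    # tail that container's logs, as logs.go does for each found ID
    sudo /usr/bin/crictl logs --tail 400 "$ID"
    # service-level logs come from journald rather than the CRI
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400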
	I0213 23:18:04.455358   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:06.953191   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.954115   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:10.954853   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.832437   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:18:08.838687   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:18:08.839999   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:18:08.840021   49120 api_server.go:131] duration metric: took 11.488768389s to wait for apiserver health ...
	I0213 23:18:08.840031   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:08.840058   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:08.840122   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:08.891532   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:08.891559   49120 cri.go:89] found id: ""
	I0213 23:18:08.891567   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:18:08.891618   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.896712   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:08.896802   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:08.943555   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:08.943584   49120 cri.go:89] found id: ""
	I0213 23:18:08.943593   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:18:08.943654   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.948658   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:08.948730   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:08.995867   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:08.995896   49120 cri.go:89] found id: ""
	I0213 23:18:08.995905   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:18:08.995970   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.000810   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:09.000883   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:09.046606   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.046636   49120 cri.go:89] found id: ""
	I0213 23:18:09.046646   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:18:09.046706   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.050924   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:09.050986   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:09.097414   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.097445   49120 cri.go:89] found id: ""
	I0213 23:18:09.097456   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:18:09.097525   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.102101   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:09.102177   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:09.164244   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.164267   49120 cri.go:89] found id: ""
	I0213 23:18:09.164274   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:18:09.164323   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.169164   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:09.169238   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:09.217068   49120 cri.go:89] found id: ""
	I0213 23:18:09.217094   49120 logs.go:276] 0 containers: []
	W0213 23:18:09.217101   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:09.217106   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:09.217174   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:09.256986   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.257017   49120 cri.go:89] found id: ""
	I0213 23:18:09.257028   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:18:09.257088   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.261602   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:18:09.261625   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.314910   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:18:09.314957   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.361576   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:18:09.361609   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.433243   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:18:09.433281   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.485648   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:09.485698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:09.634091   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:18:09.634127   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:09.681649   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:18:09.681689   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:09.729410   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:09.729449   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:10.100058   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:18:10.100104   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:10.156178   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:10.156209   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:10.229188   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.229358   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.251947   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:10.251987   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:10.268224   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:18:10.268251   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:10.319580   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319608   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:10.319651   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:18:10.319663   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.319673   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.319685   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319696   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:13.453597   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:15.445609   49715 pod_ready.go:81] duration metric: took 4m0.000451749s waiting for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	E0213 23:18:15.445643   49715 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:18:15.445653   49715 pod_ready.go:38] duration metric: took 4m2.428270702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:18:15.445670   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:18:15.445716   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:15.445773   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:15.501757   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:15.501791   49715 cri.go:89] found id: ""
	I0213 23:18:15.501802   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:15.501863   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.507658   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:15.507738   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:15.552164   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:15.552197   49715 cri.go:89] found id: ""
	I0213 23:18:15.552204   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:15.552257   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.557704   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:15.557764   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:15.606147   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:15.606168   49715 cri.go:89] found id: ""
	I0213 23:18:15.606175   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:15.606231   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.610863   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:15.610939   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:15.655298   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:15.655320   49715 cri.go:89] found id: ""
	I0213 23:18:15.655329   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:15.655387   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.660000   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:15.660062   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:15.699700   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:15.699735   49715 cri.go:89] found id: ""
	I0213 23:18:15.699745   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:15.699815   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.704535   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:15.704614   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:15.746999   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:15.747028   49715 cri.go:89] found id: ""
	I0213 23:18:15.747038   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:15.747091   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.752065   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:15.752137   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:15.793372   49715 cri.go:89] found id: ""
	I0213 23:18:15.793404   49715 logs.go:276] 0 containers: []
	W0213 23:18:15.793415   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:15.793422   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:15.793487   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:15.839630   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:15.839660   49715 cri.go:89] found id: ""
	I0213 23:18:15.839668   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:15.839723   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.844199   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:15.844225   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:15.904450   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:15.904479   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:15.925777   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:15.925805   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:16.079602   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:16.079634   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:16.121369   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:16.121400   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:16.174404   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:16.174440   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:16.216286   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:16.216321   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:16.629527   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:16.629564   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:16.708003   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.708235   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.729748   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:16.729784   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:16.784398   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:16.784432   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:16.829885   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:16.829923   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:16.872036   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:16.872066   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:16.937327   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937359   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:16.937411   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:16.937421   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.937431   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.937441   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937449   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:20.329462   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:20.329500   49120 system_pods.go:61] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.329508   49120 system_pods.go:61] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.329515   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.329521   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.329527   49120 system_pods.go:61] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.329533   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.329543   49120 system_pods.go:61] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.329550   49120 system_pods.go:61] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.329560   49120 system_pods.go:74] duration metric: took 11.489522059s to wait for pod list to return data ...
	I0213 23:18:20.329569   49120 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:20.332784   49120 default_sa.go:45] found service account: "default"
	I0213 23:18:20.332809   49120 default_sa.go:55] duration metric: took 3.233136ms for default service account to be created ...
	I0213 23:18:20.332817   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:20.339002   49120 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:20.339033   49120 system_pods.go:89] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.339042   49120 system_pods.go:89] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.339049   49120 system_pods.go:89] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.339056   49120 system_pods.go:89] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.339063   49120 system_pods.go:89] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.339070   49120 system_pods.go:89] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.339084   49120 system_pods.go:89] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.339093   49120 system_pods.go:89] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.339116   49120 system_pods.go:126] duration metric: took 6.292649ms to wait for k8s-apps to be running ...
	I0213 23:18:20.339125   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:20.339183   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:20.354459   49120 system_svc.go:56] duration metric: took 15.325743ms WaitForService to wait for kubelet.
	I0213 23:18:20.354488   49120 kubeadm.go:581] duration metric: took 4m38.510005999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:20.354505   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:20.358160   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:20.358186   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:20.358195   49120 node_conditions.go:105] duration metric: took 3.685402ms to run NodePressure ...
	I0213 23:18:20.358205   49120 start.go:228] waiting for startup goroutines ...
	I0213 23:18:20.358211   49120 start.go:233] waiting for cluster config update ...
	I0213 23:18:20.358220   49120 start.go:242] writing updated cluster config ...
	I0213 23:18:20.358527   49120 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:20.409811   49120 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 23:18:20.412251   49120 out.go:177] * Done! kubectl is now configured to use "no-preload-778731" cluster and "default" namespace by default
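no-preload-778731 reaches "Done!" only after the readiness ladder the preceding lines walk through: apiserver process, then /healthz, then kube-system pods and the default service account, then the kubelet service. A rough manual equivalent assembled from commands and endpoints shown in the log (the kubectl step is an illustrative assumption; minikube itself checks pods through its Go client in system_pods.go, not via kubectl):

    # apiserver process is up (pattern as logged in api_server.go:52/72)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # healthz endpoint answers 200 "ok" (URL as logged for this cluster; pass certs if anonymous access is disabled)
    curl -k https://192.168.83.31:8443/healthz
    # kube-system pods are Running (illustrative only; not the command minikube runs)
    kubectl --context no-preload-778731 get pods -n kube-system
    # kubelet service is active (command as logged in system_svc.go:44)
    sudo systemctl is-active --quiet service kubelet && echo kubelet active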
	I0213 23:18:26.939087   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:18:26.956231   49715 api_server.go:72] duration metric: took 4m16.268553955s to wait for apiserver process to appear ...
	I0213 23:18:26.956259   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:18:26.956317   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:26.956382   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:27.006428   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.006455   49715 cri.go:89] found id: ""
	I0213 23:18:27.006465   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:27.006527   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.011468   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:27.011542   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:27.054309   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.054334   49715 cri.go:89] found id: ""
	I0213 23:18:27.054344   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:27.054393   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.058925   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:27.058979   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:27.101942   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.101971   49715 cri.go:89] found id: ""
	I0213 23:18:27.101981   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:27.102041   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.107540   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:27.107609   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:27.152126   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.152150   49715 cri.go:89] found id: ""
	I0213 23:18:27.152157   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:27.152203   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.156537   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:27.156608   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:27.202931   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:27.202952   49715 cri.go:89] found id: ""
	I0213 23:18:27.202959   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:27.203006   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.209339   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:27.209405   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:27.250771   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:27.250814   49715 cri.go:89] found id: ""
	I0213 23:18:27.250828   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:27.250898   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.255547   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:27.255621   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:27.297645   49715 cri.go:89] found id: ""
	I0213 23:18:27.297679   49715 logs.go:276] 0 containers: []
	W0213 23:18:27.297689   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:27.297697   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:27.297765   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:27.340690   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.340719   49715 cri.go:89] found id: ""
	I0213 23:18:27.340728   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:27.340786   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.345308   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:27.345338   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:27.481620   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:27.481653   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.541421   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:27.541456   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.594527   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:27.594559   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.657323   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:27.657358   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.710198   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:27.710234   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.750419   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:27.750451   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:28.148333   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:28.148374   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:28.162927   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:28.162959   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:28.214802   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:28.214835   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:28.264035   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:28.264061   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:28.328849   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:28.328888   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:28.408683   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.408859   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429691   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429721   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:28.429772   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:28.429780   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.429787   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429793   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429798   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:38.431065   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:18:38.438496   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:18:38.440109   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:38.440131   49715 api_server.go:131] duration metric: took 11.483865303s to wait for apiserver health ...
	I0213 23:18:38.440139   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:38.440163   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:38.440218   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:38.485767   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:38.485791   49715 cri.go:89] found id: ""
	I0213 23:18:38.485798   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:38.485847   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.490804   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:38.490876   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:38.540174   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:38.540196   49715 cri.go:89] found id: ""
	I0213 23:18:38.540203   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:38.540247   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.545816   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:38.545904   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:38.593443   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:38.593466   49715 cri.go:89] found id: ""
	I0213 23:18:38.593474   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:38.593531   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.598567   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:38.598642   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:38.646508   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:38.646539   49715 cri.go:89] found id: ""
	I0213 23:18:38.646549   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:38.646605   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.651425   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:38.651489   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:38.695133   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:38.695157   49715 cri.go:89] found id: ""
	I0213 23:18:38.695166   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:38.695226   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.700446   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:38.700504   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:38.748214   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.748251   49715 cri.go:89] found id: ""
	I0213 23:18:38.748261   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:38.748319   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.753466   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:38.753532   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:38.796480   49715 cri.go:89] found id: ""
	I0213 23:18:38.796505   49715 logs.go:276] 0 containers: []
	W0213 23:18:38.796514   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:38.796521   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:38.796597   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:38.838145   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.838189   49715 cri.go:89] found id: ""
	I0213 23:18:38.838199   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:38.838259   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.844252   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:38.844279   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.919402   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:38.919442   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.963733   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:38.963767   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:39.013301   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:39.013336   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:39.142161   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:39.142192   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:39.199423   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:39.199455   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:39.245639   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:39.245669   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:39.290916   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:39.290954   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:39.343373   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:39.343405   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:39.700393   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:39.700441   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:39.777386   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.777564   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.800035   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:39.800087   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:39.817941   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:39.817972   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:39.870635   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870675   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:39.870733   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:39.870744   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.870749   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.870756   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870764   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:49.878184   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:49.878220   49715 system_pods.go:61] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.878229   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.878237   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.878244   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.878250   49715 system_pods.go:61] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.878256   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.878268   49715 system_pods.go:61] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.878276   49715 system_pods.go:61] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.878284   49715 system_pods.go:74] duration metric: took 11.438139039s to wait for pod list to return data ...
	I0213 23:18:49.878294   49715 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:49.881702   49715 default_sa.go:45] found service account: "default"
	I0213 23:18:49.881730   49715 default_sa.go:55] duration metric: took 3.42943ms for default service account to be created ...
	I0213 23:18:49.881741   49715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:49.888356   49715 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:49.888380   49715 system_pods.go:89] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.888385   49715 system_pods.go:89] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.888392   49715 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.888397   49715 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.888403   49715 system_pods.go:89] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.888409   49715 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.888422   49715 system_pods.go:89] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.888434   49715 system_pods.go:89] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.888446   49715 system_pods.go:126] duration metric: took 6.698139ms to wait for k8s-apps to be running ...
	I0213 23:18:49.888456   49715 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:49.888497   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:49.905396   49715 system_svc.go:56] duration metric: took 16.928016ms WaitForService to wait for kubelet.
	I0213 23:18:49.905427   49715 kubeadm.go:581] duration metric: took 4m39.217754888s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:49.905452   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:49.909261   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:49.909296   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:49.909312   49715 node_conditions.go:105] duration metric: took 3.854435ms to run NodePressure ...
	I0213 23:18:49.909326   49715 start.go:228] waiting for startup goroutines ...
	I0213 23:18:49.909334   49715 start.go:233] waiting for cluster config update ...
	I0213 23:18:49.909347   49715 start.go:242] writing updated cluster config ...
	I0213 23:18:49.909654   49715 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:49.961022   49715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:49.963033   49715 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-083863" cluster and "default" namespace by default
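	
	For reference, the health wait that the log above records ("Checking apiserver healthz at https://192.168.39.3:8444/healthz ... returned 200: ok") boils down to a plain HTTPS GET against the apiserver's /healthz endpoint. Below is a minimal, editor-added sketch of such a probe in Go; the host, port, and expected 200/"ok" response are taken from the log, while skipping TLS verification is an assumption made only to keep the example self-contained (minikube itself authenticates with the profile's cluster CA and client certificates).
	
	// healthz_probe.go - minimal sketch of an apiserver /healthz probe (editor-added,
	// not part of the test run). Endpoint and expected response are those logged above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustration only: a real probe would load the cluster CA
				// instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.3:8444/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Healthy case in the log above: status 200 with body "ok".
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}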
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:00 UTC, ends at Tue 2024-02-13 23:27:22 UTC. --
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.288498710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866842288477241,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1c421616-5488-4ee9-87b3-db055eaab9fd name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.289369895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb4ce584-9f9f-4460-bc3d-c5384ee28b37 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.289417404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb4ce584-9f9f-4460-bc3d-c5384ee28b37 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.289593634Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb4ce584-9f9f-4460-bc3d-c5384ee28b37 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.330203458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6c9e1016-0d07-4729-9a80-02dd0c057c74 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.330263547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6c9e1016-0d07-4729-9a80-02dd0c057c74 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.332022074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=19c74f15-fef1-496b-b2d2-4c99aa4898ee name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.332342327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866842332328862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=19c74f15-fef1-496b-b2d2-4c99aa4898ee name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.333054987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6a9bab1f-1f47-4abd-89f3-e4e664d9a53e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.333100857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6a9bab1f-1f47-4abd-89f3-e4e664d9a53e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.333277088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6a9bab1f-1f47-4abd-89f3-e4e664d9a53e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.375491817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b2b740a8-f17e-4b2c-8cf1-f20fda45784b name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.375550527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b2b740a8-f17e-4b2c-8cf1-f20fda45784b name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.377601002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d220e52a-38ca-4dce-b8d2-4c544c119bf6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.378042257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866842378028314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d220e52a-38ca-4dce-b8d2-4c544c119bf6 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.378857386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9352498d-6470-48be-b3e6-2669b4c72001 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.378940219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9352498d-6470-48be-b3e6-2669b4c72001 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.379134463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9352498d-6470-48be-b3e6-2669b4c72001 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.435820040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=50a32da1-3f03-4e01-84d0-4520914d4c7c name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.435960751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=50a32da1-3f03-4e01-84d0-4520914d4c7c name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.438651912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bebf9823-58dd-484a-b16c-4a3c1ac4177e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.439083855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866842439069701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=bebf9823-58dd-484a-b16c-4a3c1ac4177e name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.443571415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fed029fa-e732-4015-88ec-53cdd7b2490f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.444048116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fed029fa-e732-4015-88ec-53cdd7b2490f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:22 no-preload-778731 crio[728]: time="2024-02-13 23:27:22.444618405Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fed029fa-e732-4015-88ec-53cdd7b2490f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	032daf7e93d06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   6a68248c9129a       storage-provisioner
	bb7a89704fa24       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   451b01d1f17f7       coredns-76f75df574-f4g5w
	6b12d9bbcaaf9       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   22add25108920       kube-proxy-7vcqq
	75e6b925f0095       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   fe5100663112a       etcd-no-preload-778731
	f193476ba382c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   f8a4c800f4dbe       kube-scheduler-no-preload-778731
	a14e489a0cbc6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   23451f1f071ca       kube-apiserver-no-preload-778731
	1bbf42830ebf1       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   1f1128048efed       kube-controller-manager-no-preload-778731
	
	
	==> coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55809 - 17046 "HINFO IN 6020288737557843742.2579504881925505847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009476941s
	
	
	==> describe nodes <==
	Name:               no-preload-778731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-778731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=no-preload-778731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-778731
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:27:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.31
	  Hostname:    no-preload-778731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ad4a7d6c34b4e29947628a783208913
	  System UUID:                5ad4a7d6-c34b-4e29-9476-28a783208913
	  Boot ID:                    945b7cbf-253c-4566-ad72-aa54f0f30632
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-f4g5w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-778731                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-778731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-778731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-7vcqq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-778731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-mt6qd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x9 over 14m)  kubelet          Node no-preload-778731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x7 over 14m)  kubelet          Node no-preload-778731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-778731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-778731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-778731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-778731 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node no-preload-778731 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeReady                13m                kubelet          Node no-preload-778731 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-778731 event: Registered Node no-preload-778731 in Controller
	
	
	==> dmesg <==
	[Feb13 23:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.408562] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.388611] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[Feb13 23:08] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.590058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.729248] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.116953] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.176176] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.136905] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.261295] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +29.283296] systemd-fstab-generator[1340]: Ignoring "noauto" for root device
	[ +19.333544] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:09] hrtimer: interrupt took 2796717 ns
	[Feb13 23:13] systemd-fstab-generator[3985]: Ignoring "noauto" for root device
	[ +10.320444] systemd-fstab-generator[4316]: Ignoring "noauto" for root device
	[ +14.765805] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] <==
	{"level":"info","ts":"2024-02-13T23:13:22.287588Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.31:2380"}
	{"level":"info","ts":"2024-02-13T23:13:22.287772Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.31:2380"}
	{"level":"info","ts":"2024-02-13T23:13:22.290299Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"1a7f054d9a9436d0","initial-advertise-peer-urls":["https://192.168.83.31:2380"],"listen-peer-urls":["https://192.168.83.31:2380"],"advertise-client-urls":["https://192.168.83.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T23:13:22.290388Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T23:13:23.153892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:23.153997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:23.154045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 received MsgPreVoteResp from 1a7f054d9a9436d0 at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:23.154084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 received MsgVoteResp from 1a7f054d9a9436d0 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a7f054d9a9436d0 elected leader 1a7f054d9a9436d0 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.155912Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157291Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a7f054d9a9436d0","local-member-attributes":"{Name:no-preload-778731 ClientURLs:[https://192.168.83.31:2379]}","request-path":"/0/members/1a7f054d9a9436d0/attributes","cluster-id":"bdb46277f8bc3ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:13:23.157389Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:23.157801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bdb46277f8bc3ba","local-member-id":"1a7f054d9a9436d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157947Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157987Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:23.16002Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.31:2379"}
	{"level":"info","ts":"2024-02-13T23:13:23.162012Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:23.162088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:23.171293Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:23:23.213068Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-02-13T23:23:23.216527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.796434ms","hash":4246754320}
	{"level":"info","ts":"2024-02-13T23:23:23.216585Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4246754320,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 23:27:22 up 19 min,  0 users,  load average: 0.35, 0.46, 0.39
	Linux no-preload-778731 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] <==
	I0213 23:21:25.901952       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:23:24.902946       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:24.903419       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0213 23:23:25.904650       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:25.904850       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:23:25.904928       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:23:25.904872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:25.905069       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:23:25.906087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:25.905078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:25.905176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:24:25.905192       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:25.906251       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:25.906397       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:24:25.906443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:25.905625       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:25.906003       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:26:25.906093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:25.906833       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:25.906939       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:26:25.907118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] <==
	I0213 23:21:40.870148       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:22:10.376459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:22:10.879415       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:22:40.384041       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:22:40.888326       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:10.391953       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:10.898255       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:40.398195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:40.908417       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:10.405788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:10.917762       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:40.411950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:40.928846       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:24:51.885014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="120.037µs"
	I0213 23:25:05.886387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.403µs"
	E0213 23:25:10.418765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:10.936727       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:25:40.424403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:40.947164       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:10.430525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:10.958383       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:40.437614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:40.967762       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:10.443934       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:10.977893       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] <==
	I0213 23:13:44.406158       1 server_others.go:72] "Using iptables proxy"
	I0213 23:13:44.424881       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.83.31"]
	I0213 23:13:44.543165       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0213 23:13:44.543258       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:13:44.543290       1 server_others.go:168] "Using iptables Proxier"
	I0213 23:13:44.552756       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:13:44.553029       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0213 23:13:44.553079       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:13:44.554400       1 config.go:188] "Starting service config controller"
	I0213 23:13:44.554461       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:13:44.554504       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:13:44.554521       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:13:44.559201       1 config.go:315] "Starting node config controller"
	I0213 23:13:44.559448       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:13:44.655000       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:13:44.655131       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:13:44.661331       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] <==
	W0213 23:13:25.816569       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:13:25.816627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:13:25.853391       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:13:25.853459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 23:13:25.861413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:25.861470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:25.903772       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:13:25.903839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:13:25.929051       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:25.929160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:26.065142       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.065283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.088108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.088206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.088452       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:26.088589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:26.143971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.144094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.282998       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:26.283110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:26.365334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:13:26.365416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 23:13:26.441149       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:26.441208       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0213 23:13:28.211800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:00 UTC, ends at Tue 2024-02-13 23:27:23 UTC. --
	Feb 13 23:24:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:24:38 no-preload-778731 kubelet[4323]: E0213 23:24:38.878841    4323 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:24:38 no-preload-778731 kubelet[4323]: E0213 23:24:38.878887    4323 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:24:38 no-preload-778731 kubelet[4323]: E0213 23:24:38.879119    4323 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hchnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-mt6qd_kube-system(9726753d-b785-48dc-81d7-86a787851927): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:24:38 no-preload-778731 kubelet[4323]: E0213 23:24:38.879166    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:24:51 no-preload-778731 kubelet[4323]: E0213 23:24:51.867735    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:25:05 no-preload-778731 kubelet[4323]: E0213 23:25:05.867234    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:25:19 no-preload-778731 kubelet[4323]: E0213 23:25:19.868857    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:25:28 no-preload-778731 kubelet[4323]: E0213 23:25:28.934997    4323 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:25:28 no-preload-778731 kubelet[4323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:25:28 no-preload-778731 kubelet[4323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:25:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:25:31 no-preload-778731 kubelet[4323]: E0213 23:25:31.868216    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:25:46 no-preload-778731 kubelet[4323]: E0213 23:25:46.872423    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:00 no-preload-778731 kubelet[4323]: E0213 23:26:00.868998    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:12 no-preload-778731 kubelet[4323]: E0213 23:26:12.867434    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:23 no-preload-778731 kubelet[4323]: E0213 23:26:23.867869    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]: E0213 23:26:28.931966    4323 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:26:37 no-preload-778731 kubelet[4323]: E0213 23:26:37.867293    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:51 no-preload-778731 kubelet[4323]: E0213 23:26:51.867594    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:03 no-preload-778731 kubelet[4323]: E0213 23:27:03.867857    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:14 no-preload-778731 kubelet[4323]: E0213 23:27:14.867869    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	
	
	==> storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] <==
	I0213 23:13:44.945554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:13:44.968933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:13:44.970209       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:13:45.009329       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:13:45.010789       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d1e291c-d674-40de-b9a3-332e6609a44e", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84 became leader
	I0213 23:13:45.011115       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84!
	I0213 23:13:45.112155       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-778731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mt6qd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd: exit status 1 (65.739554ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mt6qd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0213 23:19:11.137603   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:19:21.414212   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:27:50.552224271 +0000 UTC m=+5490.166998180
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-083863 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-083863 logs -n 25: (1.780915903s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:05:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:05:02.640377   49715 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:05:02.640501   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640509   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:05:02.640513   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640736   49715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:05:02.641321   49715 out.go:298] Setting JSON to false
	I0213 23:05:02.642273   49715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6454,"bootTime":1707859049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:05:02.642347   49715 start.go:138] virtualization: kvm guest
	I0213 23:05:02.645098   49715 out.go:177] * [default-k8s-diff-port-083863] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:05:02.646964   49715 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:05:02.646970   49715 notify.go:220] Checking for updates...
	I0213 23:05:02.648511   49715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:05:02.650105   49715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:05:02.651715   49715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:05:02.653359   49715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:05:02.655095   49715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:05:02.657048   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:05:02.657426   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.657495   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.672324   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0213 23:05:02.672730   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.673260   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.673290   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.673647   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.673817   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.674096   49715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:05:02.674432   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.674472   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.688915   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0213 23:05:02.689349   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.689790   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.689816   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.690223   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.690421   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.727324   49715 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:05:02.728797   49715 start.go:298] selected driver: kvm2
	I0213 23:05:02.728815   49715 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.728927   49715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:05:02.729600   49715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.729674   49715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:05:02.745692   49715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:05:02.746106   49715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:05:02.746172   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:05:02.746187   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:05:02.746199   49715 start_flags.go:321] config:
	{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.746779   49715 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.748860   49715 out.go:177] * Starting control plane node default-k8s-diff-port-083863 in cluster default-k8s-diff-port-083863
	I0213 23:05:02.750290   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:05:02.750326   49715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:05:02.750333   49715 cache.go:56] Caching tarball of preloaded images
	I0213 23:05:02.750421   49715 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:05:02.750463   49715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:05:02.750576   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:05:02.750762   49715 start.go:365] acquiring machines lock for default-k8s-diff-port-083863: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:05:07.158187   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:10.230150   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:16.310133   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:19.382235   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:25.462139   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:28.534229   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:34.614137   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:37.686165   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:43.766206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:46.838168   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:52.918134   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:55.990211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:02.070192   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:05.142167   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:11.222152   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:14.294088   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:20.374194   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:23.446217   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:29.526175   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:32.598147   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:38.678146   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:41.750169   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:47.830142   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:50.902206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:56.982180   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:00.054195   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:06.134182   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:09.206215   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:15.286248   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:18.358211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:24.438162   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:27.510191   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:33.590177   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:36.662174   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:42.742237   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:45.814203   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:48.818472   49120 start.go:369] acquired machines lock for "no-preload-778731" in 4m31.005837415s
	I0213 23:07:48.818529   49120 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:07:48.818538   49120 fix.go:54] fixHost starting: 
	I0213 23:07:48.818916   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:07:48.818948   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:07:48.833483   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 23:07:48.833925   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:07:48.834425   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:07:48.834452   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:07:48.834778   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:07:48.835000   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:07:48.835155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:07:48.836889   49120 fix.go:102] recreateIfNeeded on no-preload-778731: state=Stopped err=<nil>
	I0213 23:07:48.836930   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	W0213 23:07:48.837148   49120 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:07:48.840033   49120 out.go:177] * Restarting existing kvm2 VM for "no-preload-778731" ...
	I0213 23:07:48.816416   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:07:48.816456   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:07:48.818324   49036 machine.go:91] provisioned docker machine in 4m37.408860809s
	I0213 23:07:48.818361   49036 fix.go:56] fixHost completed within 4m37.431023423s
	I0213 23:07:48.818366   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 4m37.431037395s
	W0213 23:07:48.818389   49036 start.go:694] error starting host: provision: host is not running
	W0213 23:07:48.818527   49036 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 23:07:48.818541   49036 start.go:709] Will try again in 5 seconds ...
	I0213 23:07:48.841324   49120 main.go:141] libmachine: (no-preload-778731) Calling .Start
	I0213 23:07:48.841532   49120 main.go:141] libmachine: (no-preload-778731) Ensuring networks are active...
	I0213 23:07:48.842327   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network default is active
	I0213 23:07:48.842678   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network mk-no-preload-778731 is active
	I0213 23:07:48.843032   49120 main.go:141] libmachine: (no-preload-778731) Getting domain xml...
	I0213 23:07:48.843852   49120 main.go:141] libmachine: (no-preload-778731) Creating domain...
	I0213 23:07:50.042665   49120 main.go:141] libmachine: (no-preload-778731) Waiting to get IP...
	I0213 23:07:50.043679   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.044091   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.044189   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.044069   50144 retry.go:31] will retry after 251.949505ms: waiting for machine to come up
	I0213 23:07:50.297817   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.298535   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.298567   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.298493   50144 retry.go:31] will retry after 319.494876ms: waiting for machine to come up
	I0213 23:07:50.620050   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.620443   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.620468   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.620395   50144 retry.go:31] will retry after 308.031117ms: waiting for machine to come up
	I0213 23:07:50.929942   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.930361   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.930391   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.930309   50144 retry.go:31] will retry after 513.800078ms: waiting for machine to come up
	I0213 23:07:51.446223   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:51.446875   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:51.446904   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:51.446813   50144 retry.go:31] will retry after 592.80917ms: waiting for machine to come up
	I0213 23:07:52.042126   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.042542   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.042573   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.042519   50144 retry.go:31] will retry after 688.102963ms: waiting for machine to come up
	I0213 23:07:53.818751   49036 start.go:365] acquiring machines lock for old-k8s-version-245122: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:07:52.732194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.732576   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.732602   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.732538   50144 retry.go:31] will retry after 1.143041451s: waiting for machine to come up
	I0213 23:07:53.877287   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:53.877661   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:53.877687   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:53.877624   50144 retry.go:31] will retry after 918.528315ms: waiting for machine to come up
	I0213 23:07:54.797760   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:54.798287   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:54.798314   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:54.798252   50144 retry.go:31] will retry after 1.679404533s: waiting for machine to come up
	I0213 23:07:56.479283   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:56.479853   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:56.479880   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:56.479785   50144 retry.go:31] will retry after 1.510596076s: waiting for machine to come up
	I0213 23:07:57.992757   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:57.993320   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:57.993352   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:57.993274   50144 retry.go:31] will retry after 2.041602638s: waiting for machine to come up
	I0213 23:08:00.036654   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:00.037130   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:00.037162   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:00.037075   50144 retry.go:31] will retry after 3.403460211s: waiting for machine to come up
	I0213 23:08:03.444689   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:03.445152   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:03.445176   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:03.445088   50144 retry.go:31] will retry after 4.270182412s: waiting for machine to come up
	I0213 23:08:09.107106   49443 start.go:369] acquired machines lock for "embed-certs-340656" in 3m54.456203319s
	I0213 23:08:09.107175   49443 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:09.107194   49443 fix.go:54] fixHost starting: 
	I0213 23:08:09.107647   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:09.107696   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:09.124314   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0213 23:08:09.124675   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:09.125131   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:08:09.125153   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:09.125509   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:09.125705   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:09.125898   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:08:09.127641   49443 fix.go:102] recreateIfNeeded on embed-certs-340656: state=Stopped err=<nil>
	I0213 23:08:09.127661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	W0213 23:08:09.127830   49443 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:09.130334   49443 out.go:177] * Restarting existing kvm2 VM for "embed-certs-340656" ...
	I0213 23:08:09.132354   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Start
	I0213 23:08:09.132546   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring networks are active...
	I0213 23:08:09.133391   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network default is active
	I0213 23:08:09.133758   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network mk-embed-certs-340656 is active
	I0213 23:08:09.134160   49443 main.go:141] libmachine: (embed-certs-340656) Getting domain xml...
	I0213 23:08:09.134954   49443 main.go:141] libmachine: (embed-certs-340656) Creating domain...
	I0213 23:08:07.719971   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.720520   49120 main.go:141] libmachine: (no-preload-778731) Found IP for machine: 192.168.83.31
	I0213 23:08:07.720541   49120 main.go:141] libmachine: (no-preload-778731) Reserving static IP address...
	I0213 23:08:07.720559   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has current primary IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.721043   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.721071   49120 main.go:141] libmachine: (no-preload-778731) DBG | skip adding static IP to network mk-no-preload-778731 - found existing host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"}
	I0213 23:08:07.721086   49120 main.go:141] libmachine: (no-preload-778731) Reserved static IP address: 192.168.83.31
	I0213 23:08:07.721105   49120 main.go:141] libmachine: (no-preload-778731) DBG | Getting to WaitForSSH function...
	I0213 23:08:07.721120   49120 main.go:141] libmachine: (no-preload-778731) Waiting for SSH to be available...
	I0213 23:08:07.723769   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724343   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.724370   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724485   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH client type: external
	I0213 23:08:07.724515   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa (-rw-------)
	I0213 23:08:07.724552   49120 main.go:141] libmachine: (no-preload-778731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:07.724577   49120 main.go:141] libmachine: (no-preload-778731) DBG | About to run SSH command:
	I0213 23:08:07.724605   49120 main.go:141] libmachine: (no-preload-778731) DBG | exit 0
	I0213 23:08:07.823050   49120 main.go:141] libmachine: (no-preload-778731) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:07.823504   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetConfigRaw
	I0213 23:08:07.824155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:07.826730   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827237   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.827277   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827608   49120 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:08:07.827851   49120 machine.go:88] provisioning docker machine ...
	I0213 23:08:07.827877   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:07.828112   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828416   49120 buildroot.go:166] provisioning hostname "no-preload-778731"
	I0213 23:08:07.828464   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828745   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.832015   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832438   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.832477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832698   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.832929   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833125   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833288   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.833480   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.833828   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.833845   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778731 && echo "no-preload-778731" | sudo tee /etc/hostname
	I0213 23:08:07.979041   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778731
	
	I0213 23:08:07.979079   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.982378   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982755   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.982783   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982982   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.983137   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983346   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983462   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.983600   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.983946   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.983967   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778731/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:08.122610   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:08.122641   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:08.122657   49120 buildroot.go:174] setting up certificates
	I0213 23:08:08.122666   49120 provision.go:83] configureAuth start
	I0213 23:08:08.122674   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:08.122935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:08.125641   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126016   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.126046   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126205   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.128441   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128736   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.128780   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128918   49120 provision.go:138] copyHostCerts
	I0213 23:08:08.128984   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:08.128997   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:08.129067   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:08.129198   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:08.129211   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:08.129248   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:08.129321   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:08.129335   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:08.129373   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:08.129443   49120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.no-preload-778731 san=[192.168.83.31 192.168.83.31 localhost 127.0.0.1 minikube no-preload-778731]
	I0213 23:08:08.326156   49120 provision.go:172] copyRemoteCerts
	I0213 23:08:08.326234   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:08.326263   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.329373   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.329952   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.329986   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.330257   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.330447   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.330599   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.330737   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.423570   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:08.447689   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:08.472766   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:08:08.496594   49120 provision.go:86] duration metric: configureAuth took 373.917105ms
	I0213 23:08:08.496623   49120 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:08.496815   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:08:08.496899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.499464   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499771   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.499801   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.500116   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500284   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500459   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.500651   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.500962   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.500981   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:08.828899   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:08.828935   49120 machine.go:91] provisioned docker machine in 1.001067662s
	I0213 23:08:08.828948   49120 start.go:300] post-start starting for "no-preload-778731" (driver="kvm2")
	I0213 23:08:08.828966   49120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:08.828987   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:08.829378   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:08.829401   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.831985   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832340   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.832365   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832498   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.832717   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.832873   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.833022   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.930192   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:08.934633   49120 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:08.934660   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:08.934723   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:08.934804   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:08.934893   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:08.945400   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:08.973850   49120 start.go:303] post-start completed in 144.888108ms
	I0213 23:08:08.973894   49120 fix.go:56] fixHost completed within 20.155355472s
	I0213 23:08:08.973917   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.976477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976799   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.976831   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976990   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.977177   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977358   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977513   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.977664   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.978069   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.978082   49120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:09.106952   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865689.053803664
	
	I0213 23:08:09.106977   49120 fix.go:206] guest clock: 1707865689.053803664
	I0213 23:08:09.106984   49120 fix.go:219] Guest: 2024-02-13 23:08:09.053803664 +0000 UTC Remote: 2024-02-13 23:08:08.973898202 +0000 UTC m=+291.312557253 (delta=79.905462ms)
	I0213 23:08:09.107004   49120 fix.go:190] guest clock delta is within tolerance: 79.905462ms
	I0213 23:08:09.107011   49120 start.go:83] releasing machines lock for "no-preload-778731", held for 20.288505954s
	I0213 23:08:09.107046   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.107372   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:09.110226   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110592   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.110623   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110795   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111368   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111531   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111622   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:09.111662   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.113712   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.114053   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.114096   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.117964   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.118031   49120 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:09.118065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.118167   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.118318   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.118615   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.120610   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121054   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.121088   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121290   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.121461   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.121627   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.121770   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.234065   49120 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:09.240751   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:09.393966   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:09.401672   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:09.401767   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:09.426073   49120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:09.426099   49120 start.go:475] detecting cgroup driver to use...
	I0213 23:08:09.426172   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:09.446114   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:09.461330   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:09.461404   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:09.475964   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:09.490801   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:09.621898   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:09.747413   49120 docker.go:233] disabling docker service ...
	I0213 23:08:09.747470   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:09.766642   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:09.783116   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:09.910634   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:10.052181   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:10.066413   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:10.089436   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:10.089505   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.100366   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:10.100453   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.111681   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.122231   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.132945   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:10.146287   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:10.156405   49120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:10.156481   49120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:10.172152   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:10.182862   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:10.315633   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:10.509774   49120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:10.509878   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:10.514924   49120 start.go:543] Will wait 60s for crictl version
	I0213 23:08:10.515016   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.518898   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:10.558596   49120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:10.558695   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.611876   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.664604   49120 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:08:10.665908   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:10.669029   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669393   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:10.669442   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669676   49120 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:10.673975   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:10.686760   49120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:08:10.686830   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:10.730784   49120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:08:10.730813   49120 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:08:10.730900   49120 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.730903   49120 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.730909   49120 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.730914   49120 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.731026   49120 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.731083   49120 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.731131   49120 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.731497   49120 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732506   49120 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.732511   49120 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.732513   49120 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.732543   49120 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732577   49120 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.732597   49120 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.732719   49120 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.732759   49120 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.880038   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.891830   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0213 23:08:10.905668   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.930079   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.940850   49120 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0213 23:08:10.940894   49120 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.940941   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.942664   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.985299   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.011467   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.040720   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.099497   49120 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0213 23:08:11.099544   49120 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0213 23:08:11.099577   49120 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.099614   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:11.099636   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099651   49120 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0213 23:08:11.099683   49120 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.099711   49120 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0213 23:08:11.099740   49120 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.099746   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099760   49120 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0213 23:08:11.099767   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099782   49120 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.099547   49120 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.099901   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099916   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.107567   49120 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0213 23:08:11.107614   49120 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.107675   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.119038   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.157701   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.157799   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.157722   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.157768   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.157830   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.157919   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0213 23:08:11.158002   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.200990   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 23:08:11.201117   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:11.299985   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.300039   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 23:08:11.300041   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300130   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:11.300137   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300148   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0213 23:08:11.300163   49120 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300198   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300209   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300216   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0213 23:08:11.300203   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300098   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300293   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300096   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.318252   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0213 23:08:11.318307   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318355   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318520   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0213 23:08:10.406170   49443 main.go:141] libmachine: (embed-certs-340656) Waiting to get IP...
	I0213 23:08:10.407139   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.407616   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.407692   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.407598   50262 retry.go:31] will retry after 193.299479ms: waiting for machine to come up
	I0213 23:08:10.603143   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.603673   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.603696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.603627   50262 retry.go:31] will retry after 369.099644ms: waiting for machine to come up
	I0213 23:08:10.974421   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.974922   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.974953   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.974870   50262 retry.go:31] will retry after 418.956642ms: waiting for machine to come up
	I0213 23:08:11.395489   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:11.395974   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:11.396005   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:11.395937   50262 retry.go:31] will retry after 610.320769ms: waiting for machine to come up
	I0213 23:08:12.007695   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.008167   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.008198   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.008115   50262 retry.go:31] will retry after 624.461953ms: waiting for machine to come up
	I0213 23:08:12.634088   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.634519   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.634552   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.634467   50262 retry.go:31] will retry after 903.217503ms: waiting for machine to come up
	I0213 23:08:13.539114   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:13.539683   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:13.539725   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:13.539611   50262 retry.go:31] will retry after 747.647967ms: waiting for machine to come up
	I0213 23:08:14.288632   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:14.289301   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:14.289338   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:14.289236   50262 retry.go:31] will retry after 1.415080779s: waiting for machine to come up
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.810648669s)
	I0213 23:08:15.110937   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.810587707s)
	I0213 23:08:15.110961   49120 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:15.110969   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0213 23:08:15.111009   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:17.178104   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067071549s)
	I0213 23:08:17.178130   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0213 23:08:17.178156   49120 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:17.178204   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:15.706329   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:15.706863   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:15.706901   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:15.706769   50262 retry.go:31] will retry after 1.500671136s: waiting for machine to come up
	I0213 23:08:17.209706   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:17.210252   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:17.210278   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:17.210198   50262 retry.go:31] will retry after 1.743342291s: waiting for machine to come up
	I0213 23:08:18.956397   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:18.956934   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:18.956971   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:18.956874   50262 retry.go:31] will retry after 2.095777111s: waiting for machine to come up
	I0213 23:08:18.227625   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.049388261s)
	I0213 23:08:18.227663   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 23:08:18.227691   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:18.227756   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:21.120783   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.892997016s)
	I0213 23:08:21.120823   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0213 23:08:21.120854   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.120908   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.055630   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:21.056028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:21.056106   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:21.056004   50262 retry.go:31] will retry after 3.144708692s: waiting for machine to come up
	I0213 23:08:24.202158   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:24.202562   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:24.202584   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:24.202515   50262 retry.go:31] will retry after 3.072407019s: waiting for machine to come up
	I0213 23:08:23.793772   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.672817599s)
	I0213 23:08:23.793813   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0213 23:08:23.793841   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:23.793916   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:25.866352   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.072399119s)
	I0213 23:08:25.866388   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0213 23:08:25.866422   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:25.866469   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:27.315469   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.44897051s)
	I0213 23:08:27.315502   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0213 23:08:27.315534   49120 cache_images.go:123] Successfully loaded all cached images
	I0213 23:08:27.315540   49120 cache_images.go:92] LoadImages completed in 16.584715329s
	I0213 23:08:27.315650   49120 ssh_runner.go:195] Run: crio config
	I0213 23:08:27.383180   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:27.383203   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:27.383224   49120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:27.383249   49120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778731 NodeName:no-preload-778731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:27.383445   49120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778731"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:27.383545   49120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-778731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:27.383606   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:08:27.393312   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:27.393384   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:27.401513   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0213 23:08:27.419705   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:08:27.439236   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0213 23:08:27.457026   49120 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:27.461679   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:27.474701   49120 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731 for IP: 192.168.83.31
	I0213 23:08:27.474740   49120 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:27.474922   49120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:27.474966   49120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:27.475042   49120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.key
	I0213 23:08:27.475102   49120 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key.049c2370
	I0213 23:08:27.475138   49120 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key
	I0213 23:08:27.475241   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:27.475271   49120 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:27.475281   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:27.475305   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:27.475326   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:27.475360   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:27.475401   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:27.475997   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:27.500212   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:27.526078   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:27.552892   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:27.579169   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:27.603962   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:27.628862   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:27.653046   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:27.681039   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:27.708026   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:28.658782   49715 start.go:369] acquired machines lock for "default-k8s-diff-port-083863" in 3m25.907988779s
	I0213 23:08:28.658844   49715 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:28.658851   49715 fix.go:54] fixHost starting: 
	I0213 23:08:28.659235   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:28.659276   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:28.677314   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0213 23:08:28.677718   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:28.678315   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:08:28.678355   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:28.678727   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:28.678935   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:28.679109   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:08:28.680868   49715 fix.go:102] recreateIfNeeded on default-k8s-diff-port-083863: state=Stopped err=<nil>
	I0213 23:08:28.680915   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	W0213 23:08:28.681100   49715 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:28.683083   49715 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-083863" ...
	I0213 23:08:27.278610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279033   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has current primary IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279068   49443 main.go:141] libmachine: (embed-certs-340656) Found IP for machine: 192.168.61.56
	I0213 23:08:27.279085   49443 main.go:141] libmachine: (embed-certs-340656) Reserving static IP address...
	I0213 23:08:27.279524   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.279553   49443 main.go:141] libmachine: (embed-certs-340656) Reserved static IP address: 192.168.61.56
	I0213 23:08:27.279572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | skip adding static IP to network mk-embed-certs-340656 - found existing host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"}
	I0213 23:08:27.279592   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Getting to WaitForSSH function...
	I0213 23:08:27.279609   49443 main.go:141] libmachine: (embed-certs-340656) Waiting for SSH to be available...
	I0213 23:08:27.282041   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282383   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.282417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282517   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH client type: external
	I0213 23:08:27.282548   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa (-rw-------)
	I0213 23:08:27.282582   49443 main.go:141] libmachine: (embed-certs-340656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:27.282598   49443 main.go:141] libmachine: (embed-certs-340656) DBG | About to run SSH command:
	I0213 23:08:27.282688   49443 main.go:141] libmachine: (embed-certs-340656) DBG | exit 0
	I0213 23:08:27.374230   49443 main.go:141] libmachine: (embed-certs-340656) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:27.374589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetConfigRaw
	I0213 23:08:27.375330   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.378273   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378648   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.378682   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378917   49443 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:08:27.379092   49443 machine.go:88] provisioning docker machine ...
	I0213 23:08:27.379109   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:27.379298   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379491   49443 buildroot.go:166] provisioning hostname "embed-certs-340656"
	I0213 23:08:27.379521   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379667   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.382028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382351   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.382404   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382562   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.382728   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.382880   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.383023   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.383213   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.383662   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.383682   49443 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname
	I0213 23:08:27.526044   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-340656
	
	I0213 23:08:27.526075   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.529185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529526   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.529556   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529660   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.529852   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530056   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530203   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.530356   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.530695   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.530725   49443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-340656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-340656/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-340656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:27.664926   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:27.664966   49443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:27.664993   49443 buildroot.go:174] setting up certificates
	I0213 23:08:27.665004   49443 provision.go:83] configureAuth start
	I0213 23:08:27.665019   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.665429   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.668520   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.668912   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.668937   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.669172   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.671996   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672365   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.672411   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672620   49443 provision.go:138] copyHostCerts
	I0213 23:08:27.672684   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:27.672706   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:27.672778   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:27.672914   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:27.672929   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:27.672966   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:27.673049   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:27.673060   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:27.673089   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:27.673187   49443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.embed-certs-340656 san=[192.168.61.56 192.168.61.56 localhost 127.0.0.1 minikube embed-certs-340656]
	I0213 23:08:27.924954   49443 provision.go:172] copyRemoteCerts
	I0213 23:08:27.925011   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:27.925033   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.928037   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928376   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.928410   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928588   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.928779   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.928960   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.929085   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.019335   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:28.043949   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 23:08:28.066824   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:08:28.089010   49443 provision.go:86] duration metric: configureAuth took 423.986916ms
	I0213 23:08:28.089043   49443 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:28.089251   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:28.089316   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.091655   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.091955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.091984   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.092151   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.092310   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092440   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092553   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.092694   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.092999   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.093014   49443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:28.402931   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:28.402953   49443 machine.go:91] provisioned docker machine in 1.023849221s
	I0213 23:08:28.402962   49443 start.go:300] post-start starting for "embed-certs-340656" (driver="kvm2")
	I0213 23:08:28.402972   49443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:28.402986   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.403246   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:28.403266   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.405815   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.406201   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406331   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.406514   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.406703   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.406867   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.500638   49443 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:28.504820   49443 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:28.504839   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:28.504899   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:28.504967   49443 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:28.505051   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:28.514593   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:28.536607   49443 start.go:303] post-start completed in 133.632311ms
	I0213 23:08:28.536653   49443 fix.go:56] fixHost completed within 19.429451259s
	I0213 23:08:28.536673   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.539355   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539715   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.539739   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539914   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.540115   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540275   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540420   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.540581   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.540917   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.540932   49443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:28.658649   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865708.631208852
	
	I0213 23:08:28.658674   49443 fix.go:206] guest clock: 1707865708.631208852
	I0213 23:08:28.658682   49443 fix.go:219] Guest: 2024-02-13 23:08:28.631208852 +0000 UTC Remote: 2024-02-13 23:08:28.536657964 +0000 UTC m=+254.042699377 (delta=94.550888ms)
	I0213 23:08:28.658701   49443 fix.go:190] guest clock delta is within tolerance: 94.550888ms
	I0213 23:08:28.658707   49443 start.go:83] releasing machines lock for "embed-certs-340656", held for 19.551560323s
	I0213 23:08:28.658730   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.658982   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:28.662069   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662449   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.662480   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662651   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663245   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663430   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663521   49443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:28.663567   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.663688   49443 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:28.663712   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.666417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666867   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.666900   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667039   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.667185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667234   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667418   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667467   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667518   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.667589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667736   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.782794   49443 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:28.788743   49443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:28.933478   49443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:28.940543   49443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:28.940632   49443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:28.958972   49443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:28.958994   49443 start.go:475] detecting cgroup driver to use...
	I0213 23:08:28.959084   49443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:28.977833   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:28.996142   49443 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:28.996205   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:29.015509   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:29.029839   49443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:29.140405   49443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:29.265524   49443 docker.go:233] disabling docker service ...
	I0213 23:08:29.265597   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:29.283479   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:29.300116   49443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:29.428731   49443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:29.555072   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:29.569803   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:29.589259   49443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:29.589329   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.600653   49443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:29.600732   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.612313   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.624637   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.636279   49443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:29.648496   49443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:29.658957   49443 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:29.659020   49443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:29.673605   49443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:29.684589   49443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:29.800899   49443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:29.989345   49443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:29.989423   49443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:29.995420   49443 start.go:543] Will wait 60s for crictl version
	I0213 23:08:29.995489   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:08:30.000012   49443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:30.047026   49443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:30.047114   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.095456   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.146027   49443 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:28.684576   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Start
	I0213 23:08:28.684757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring networks are active...
	I0213 23:08:28.685582   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network default is active
	I0213 23:08:28.685942   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network mk-default-k8s-diff-port-083863 is active
	I0213 23:08:28.686429   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Getting domain xml...
	I0213 23:08:28.687208   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Creating domain...
	I0213 23:08:30.003148   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting to get IP...
	I0213 23:08:30.004175   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004634   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004725   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.004599   50394 retry.go:31] will retry after 210.109414ms: waiting for machine to come up
	I0213 23:08:30.215983   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216407   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216439   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.216359   50394 retry.go:31] will retry after 367.743906ms: waiting for machine to come up
	I0213 23:08:30.586081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586629   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586663   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.586583   50394 retry.go:31] will retry after 342.736609ms: waiting for machine to come up
	I0213 23:08:30.931248   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931707   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931738   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.931656   50394 retry.go:31] will retry after 597.326691ms: waiting for machine to come up
	I0213 23:08:31.530395   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530818   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530848   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:31.530767   50394 retry.go:31] will retry after 749.518323ms: waiting for machine to come up
	I0213 23:08:32.281688   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282102   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282138   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:32.282052   50394 retry.go:31] will retry after 760.722423ms: waiting for machine to come up
	I0213 23:08:27.731687   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:27.755515   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:27.774677   49120 ssh_runner.go:195] Run: openssl version
	I0213 23:08:27.780042   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:27.789684   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794384   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794443   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.800052   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:27.809570   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:27.818781   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823148   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823241   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.829043   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:27.839290   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:27.849614   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854661   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854720   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.860365   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:27.870548   49120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:27.874967   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:27.880745   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:27.886409   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:27.892063   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:27.897857   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:27.903804   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:27.909720   49120 kubeadm.go:404] StartCluster: {Name:no-preload-778731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:27.909833   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:27.909924   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:27.951061   49120 cri.go:89] found id: ""
	I0213 23:08:27.951158   49120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:27.961916   49120 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:27.961941   49120 kubeadm.go:636] restartCluster start
	I0213 23:08:27.961993   49120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:27.971549   49120 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:27.972633   49120 kubeconfig.go:92] found "no-preload-778731" server: "https://192.168.83.31:8443"
	I0213 23:08:27.975092   49120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:27.983592   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:27.983650   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:27.993448   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.483988   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.484086   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.499804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.984581   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.984671   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.995887   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.484572   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.484680   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.496906   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.984503   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.984569   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.997813   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.484312   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.484391   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.501606   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.984144   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.984237   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.999418   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.483900   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.483977   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.498536   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.983688   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.983783   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.998804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:32.484556   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.484662   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:32.499238   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.147474   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:30.150438   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.150826   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:30.150857   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.151054   49443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:30.155517   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:30.168463   49443 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:30.168543   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:30.210212   49443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:30.210296   49443 ssh_runner.go:195] Run: which lz4
	I0213 23:08:30.214665   49443 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:30.219355   49443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:30.219383   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:32.244671   49443 crio.go:444] Took 2.030037 seconds to copy over tarball
	I0213 23:08:32.244757   49443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:33.043974   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044478   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:33.044417   50394 retry.go:31] will retry after 1.030870704s: waiting for machine to come up
	I0213 23:08:34.077209   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077662   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077692   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:34.077625   50394 retry.go:31] will retry after 1.450536952s: waiting for machine to come up
	I0213 23:08:35.529659   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530101   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530135   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:35.530042   50394 retry.go:31] will retry after 1.82898716s: waiting for machine to come up
	I0213 23:08:37.360889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361314   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361343   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:37.361270   50394 retry.go:31] will retry after 1.83473409s: waiting for machine to come up
	I0213 23:08:32.984096   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.984203   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.001189   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.483705   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.499694   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.983927   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.984057   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.999205   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.483708   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.483798   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.498840   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.984372   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.984461   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.999079   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.483661   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.497573   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.983985   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.984088   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.995899   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.484546   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.484660   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.496286   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.983902   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.984113   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.995778   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.484405   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.484518   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.495219   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.549721   49443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304931423s)
	I0213 23:08:35.549748   49443 crio.go:451] Took 3.305051 seconds to extract the tarball
	I0213 23:08:35.549778   49443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:35.590195   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:35.640735   49443 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:35.640768   49443 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:35.640850   49443 ssh_runner.go:195] Run: crio config
	I0213 23:08:35.707018   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:35.707046   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:35.707072   49443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:35.707117   49443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-340656 NodeName:embed-certs-340656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:35.707294   49443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-340656"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:35.707405   49443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-340656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:35.707483   49443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:35.717170   49443 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:35.717251   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:35.726586   49443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0213 23:08:35.744139   49443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:35.761480   49443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0213 23:08:35.779911   49443 ssh_runner.go:195] Run: grep 192.168.61.56	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:35.784152   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:35.799376   49443 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656 for IP: 192.168.61.56
	I0213 23:08:35.799417   49443 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:35.799601   49443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:35.799657   49443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:35.799766   49443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/client.key
	I0213 23:08:35.799859   49443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key.aef5f426
	I0213 23:08:35.799913   49443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key
	I0213 23:08:35.800053   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:35.800091   49443 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:35.800107   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:35.800143   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:35.800180   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:35.800215   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:35.800276   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:35.801130   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:35.829983   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:35.856832   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:35.883713   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:35.910759   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:35.937208   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:35.963904   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:35.991562   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:36.022900   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:36.049084   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:36.074152   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:36.098863   49443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:36.115588   49443 ssh_runner.go:195] Run: openssl version
	I0213 23:08:36.120864   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:36.130552   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.134999   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.135068   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.140621   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:36.150963   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:36.160917   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165428   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165472   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.171493   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:36.181635   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:36.191753   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196368   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196444   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.201985   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:36.211839   49443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:36.216608   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:36.222594   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:36.228585   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:36.234646   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:36.240579   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:36.246642   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
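
The openssl calls above follow one fixed pattern: each CA certificate is hashed with "openssl x509 -hash -noout" and symlinked into /etc/ssl/certs as <hash>.0, and the control-plane certificates are then checked for at least 24 hours of remaining validity with "-checkend 86400". The following is a minimal Go sketch of that pattern, not minikube's actual certs.go code; it assumes openssl and ln are on PATH and uses illustrative paths.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashAndLink computes the OpenSSL subject hash of a CA certificate and
// symlinks it as <hash>.0 inside certsDir, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" pairs in the log above.
func hashAndLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("ln", "-fs", certPath, certsDir+"/"+hash+".0").Run()
}

// validForADay mirrors the "-checkend 86400" calls: openssl exits 0 only if
// the certificate is still valid for at least another 86400 seconds.
func validForADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(validForADay("/var/lib/minikube/certs/apiserver.crt"))
}
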
	I0213 23:08:36.252961   49443 kubeadm.go:404] StartCluster: {Name:embed-certs-340656 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:36.253087   49443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:36.253149   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:36.297601   49443 cri.go:89] found id: ""
	I0213 23:08:36.297705   49443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:36.308068   49443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:36.308094   49443 kubeadm.go:636] restartCluster start
	I0213 23:08:36.308152   49443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:36.318071   49443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.319274   49443 kubeconfig.go:92] found "embed-certs-340656" server: "https://192.168.61.56:8443"
	I0213 23:08:36.321573   49443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:36.331006   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.331059   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.342313   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.831994   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.832106   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.845071   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.331654   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.331724   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.344311   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.831903   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.831999   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.843671   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.331225   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.331337   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.349021   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.831196   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.831292   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.847050   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.332089   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.332162   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.348108   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.198188   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198570   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198596   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:39.198528   50394 retry.go:31] will retry after 2.722095348s: waiting for machine to come up
	I0213 23:08:41.923545   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923954   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923985   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:41.923904   50394 retry.go:31] will retry after 2.239772531s: waiting for machine to come up
	I0213 23:08:37.984640   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.984743   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.999300   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.999332   49120 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:37.999340   49120 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:37.999349   49120 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:37.999402   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:38.046199   49120 cri.go:89] found id: ""
	I0213 23:08:38.046287   49120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:38.061697   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:38.071295   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:38.071378   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080401   49120 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:38.209853   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.403696   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193792627s)
	I0213 23:08:39.403733   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.602387   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.703317   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
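
The restart path shown here does not run a full kubeadm init; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. Below is a hedged Go sketch of that sequence, shelling out the way the log does; the binary directory and config path are copied from the log, while the function and its environment handling are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// replayInitPhases runs the same kubeadm init phases, in the same order, that
// the restart path above runs against an already-written kubeadm.yaml.
func replayInitPhases(kubeadmBinDir, configPath string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	bin := filepath.Join(kubeadmBinDir, "kubeadm")
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", configPath)
		cmd := exec.Command(bin, args...)
		// Prepend the versioned binaries directory, as the log's `env PATH=...` does.
		cmd.Env = append(os.Environ(), "PATH="+kubeadmBinDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(replayInitPhases("/var/lib/minikube/binaries/v1.29.0-rc.2", "/var/tmp/minikube/kubeadm.yaml"))
}
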
	I0213 23:08:39.783257   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:39.783347   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.284357   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.784437   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.284302   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.783582   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.284435   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.312653   49120 api_server.go:72] duration metric: took 2.529396171s to wait for apiserver process to appear ...
	I0213 23:08:42.312698   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:42.312719   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:42.313220   49120 api_server.go:269] stopped: https://192.168.83.31:8443/healthz: Get "https://192.168.83.31:8443/healthz": dial tcp 192.168.83.31:8443: connect: connection refused
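
The repeated "Checking apiserver status" entries are a simple poll: roughly every 500ms minikube runs pgrep for the kube-apiserver process until a PID appears or the wait gives up; pgrep's exit status 1 in the log just means "no match yet". A rough local sketch of that wait loop follows, run directly with sudo for illustration rather than through minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls `pgrep -xnf kube-apiserver.*minikube.*` every
// 500ms until a PID shows up or the timeout expires, roughly what the
// "waiting for apiserver process to appear" phase in the log is doing.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // pgrep exits 0 once it matched
		}
		time.Sleep(500 * time.Millisecond) // exit status 1 means no match yet; keep polling
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	fmt.Println(pid, err)
}
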
	I0213 23:08:39.832020   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.832156   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.848229   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.331855   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.331992   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.347635   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.831070   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.831185   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.847184   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.331346   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.331444   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.346518   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.831081   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.831160   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.846752   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.331298   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.331389   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.348782   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.831278   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.831373   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.846241   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.331807   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.331876   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.346998   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.831697   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.831792   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.843733   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.331647   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.331762   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.343476   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.165021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165387   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:44.165357   50394 retry.go:31] will retry after 2.886798605s: waiting for machine to come up
	I0213 23:08:47.055186   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055880   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Found IP for machine: 192.168.39.3
	I0213 23:08:47.055923   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserving static IP address...
	I0213 23:08:47.056480   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.056512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserved static IP address: 192.168.39.3
	I0213 23:08:47.056537   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | skip adding static IP to network mk-default-k8s-diff-port-083863 - found existing host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"}
	I0213 23:08:47.056552   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Getting to WaitForSSH function...
	I0213 23:08:47.056567   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for SSH to be available...
	I0213 23:08:47.059414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059844   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.059882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059991   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH client type: external
	I0213 23:08:47.060025   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa (-rw-------)
	I0213 23:08:47.060061   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:47.060077   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | About to run SSH command:
	I0213 23:08:47.060093   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | exit 0
	I0213 23:08:47.154417   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:47.154807   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetConfigRaw
	I0213 23:08:47.155614   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.158506   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.158979   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.159005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.159297   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:08:47.159557   49715 machine.go:88] provisioning docker machine ...
	I0213 23:08:47.159577   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:47.159833   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160012   49715 buildroot.go:166] provisioning hostname "default-k8s-diff-port-083863"
	I0213 23:08:47.160038   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160240   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.163021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163444   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.163476   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163705   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.163908   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164070   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164234   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.164391   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.164762   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.164777   49715 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-083863 && echo "default-k8s-diff-port-083863" | sudo tee /etc/hostname
	I0213 23:08:47.304583   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-083863
	
	I0213 23:08:47.304617   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.307729   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308160   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.308196   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308345   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.308541   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308713   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308921   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.309148   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.309520   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.309539   49715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-083863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-083863/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-083863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:47.442924   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:47.442958   49715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:47.442989   49715 buildroot.go:174] setting up certificates
	I0213 23:08:47.443006   49715 provision.go:83] configureAuth start
	I0213 23:08:47.443024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.443287   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.446220   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446611   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.446646   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446821   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.449591   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.449920   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.449989   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.450162   49715 provision.go:138] copyHostCerts
	I0213 23:08:47.450221   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:47.450241   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:47.450305   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:47.450482   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:47.450497   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:47.450532   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:47.450614   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:47.450625   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:47.450651   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:47.450720   49715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-083863 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube default-k8s-diff-port-083863]
	I0213 23:08:47.522550   49715 provision.go:172] copyRemoteCerts
	I0213 23:08:47.522618   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:47.522647   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.525731   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526189   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.526230   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526410   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.526610   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.526814   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.526971   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:47.626666   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:42.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.095528   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:46.095564   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:46.095581   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.178470   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.178500   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.313729   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.318658   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.318686   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.813274   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.819766   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.819808   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.313432   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.325228   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:47.325274   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.819686   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:08:47.829842   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:08:47.829896   49120 api_server.go:131] duration metric: took 5.517189469s to wait for apiserver health ...
	I0213 23:08:47.829907   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:47.829915   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:47.831685   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
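
Once the process exists, the wait switches from pgrep to polling the /healthz endpoint: the anonymous 403 and the 500 "poststarthook ... failed" responses above are treated as "not ready yet", and the loop stops at the first 200 "ok". The following is a minimal sketch of that second wait loop, assuming an anonymous HTTPS probe with certificate verification disabled; the URL is taken from the log, everything else is illustrative rather than minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint about every 500ms until
// it answers 200 or the deadline passes. 403 (anonymous user) and 500
// (poststarthooks still failing) both mean "keep waiting".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.83.31:8443/healthz", time.Minute))
}
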
	I0213 23:08:48.354933   49036 start.go:369] acquired machines lock for "old-k8s-version-245122" in 54.536117689s
	I0213 23:08:48.354988   49036 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:48.354996   49036 fix.go:54] fixHost starting: 
	I0213 23:08:48.355410   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:48.355447   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:48.375953   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0213 23:08:48.376414   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:48.376997   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:08:48.377034   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:48.377373   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:48.377578   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:08:48.377709   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:08:48.379630   49036 fix.go:102] recreateIfNeeded on old-k8s-version-245122: state=Stopped err=<nil>
	I0213 23:08:48.379660   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	W0213 23:08:48.379822   49036 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:48.381473   49036 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-245122" ...
	I0213 23:08:44.831390   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.831503   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.845068   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.331710   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.331800   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.343755   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.831306   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.831415   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.844972   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.331510   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:46.331596   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:46.343475   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.343509   49443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:46.343520   49443 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:46.343532   49443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:46.343595   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:46.388343   49443 cri.go:89] found id: ""
	I0213 23:08:46.388417   49443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:46.403792   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:46.413139   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:46.413197   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422541   49443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422566   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:46.551204   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.427625   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.656205   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.776652   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.860844   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:47.860942   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.362058   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.861851   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:49.361973   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:47.655867   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 23:08:47.687226   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:47.719579   49715 provision.go:86] duration metric: configureAuth took 276.554247ms
	I0213 23:08:47.719610   49715 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:47.719857   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:47.719945   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.723023   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723353   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.723386   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723686   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.723889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724074   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724299   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.724469   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.724860   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.724878   49715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:48.093490   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:48.093519   49715 machine.go:91] provisioned docker machine in 933.948787ms
	I0213 23:08:48.093529   49715 start.go:300] post-start starting for "default-k8s-diff-port-083863" (driver="kvm2")
	I0213 23:08:48.093540   49715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:48.093553   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.093887   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:48.093922   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.096941   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097351   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.097385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097701   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.097936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.098145   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.098367   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.188626   49715 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:48.193282   49715 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:48.193320   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:48.193406   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:48.193500   49715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:48.193597   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:48.202782   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:48.235000   49715 start.go:303] post-start completed in 141.454861ms
	I0213 23:08:48.235032   49715 fix.go:56] fixHost completed within 19.576181803s
	I0213 23:08:48.235051   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.238450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.238992   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.239024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.239320   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.239535   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239683   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239846   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.240085   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:48.240390   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:48.240401   49715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:48.354769   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865728.300012904
	
	I0213 23:08:48.354799   49715 fix.go:206] guest clock: 1707865728.300012904
	I0213 23:08:48.354811   49715 fix.go:219] Guest: 2024-02-13 23:08:48.300012904 +0000 UTC Remote: 2024-02-13 23:08:48.235035663 +0000 UTC m=+225.644270499 (delta=64.977241ms)
	I0213 23:08:48.354837   49715 fix.go:190] guest clock delta is within tolerance: 64.977241ms
	I0213 23:08:48.354845   49715 start.go:83] releasing machines lock for "default-k8s-diff-port-083863", held for 19.696026805s
	I0213 23:08:48.354884   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.355246   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:48.358586   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359040   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.359081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359323   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.359961   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360127   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360200   49715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:48.360233   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.360372   49715 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:48.360398   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.363529   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.363715   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364166   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364357   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364394   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364461   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364656   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.364824   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370192   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.370221   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.370404   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370677   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.457230   49715 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:48.484954   49715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:48.636752   49715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:48.644369   49715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:48.644452   49715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:48.667562   49715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:48.667594   49715 start.go:475] detecting cgroup driver to use...
	I0213 23:08:48.667684   49715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:48.689737   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:48.708806   49715 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:48.708876   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:48.728530   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:48.746819   49715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:48.877519   49715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:49.069574   49715 docker.go:233] disabling docker service ...
	I0213 23:08:49.069661   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:49.103853   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:49.122356   49715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:49.272225   49715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:49.412111   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:49.428799   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:49.449679   49715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:49.449734   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.465458   49715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:49.465523   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.480399   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.494161   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.507964   49715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:49.522486   49715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:49.534468   49715 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:49.534538   49715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:49.554260   49715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:49.566868   49715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:49.725125   49715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:49.963096   49715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:49.963172   49715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:49.970420   49715 start.go:543] Will wait 60s for crictl version
	I0213 23:08:49.970508   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:08:49.976177   49715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:50.024316   49715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:50.024407   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.080031   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.133918   49715 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:48.382835   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Start
	I0213 23:08:48.383129   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring networks are active...
	I0213 23:08:48.384069   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network default is active
	I0213 23:08:48.384458   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network mk-old-k8s-version-245122 is active
	I0213 23:08:48.385051   49036 main.go:141] libmachine: (old-k8s-version-245122) Getting domain xml...
	I0213 23:08:48.387192   49036 main.go:141] libmachine: (old-k8s-version-245122) Creating domain...
	I0213 23:08:49.933195   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting to get IP...
	I0213 23:08:49.934463   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:49.935084   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:49.935109   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:49.934961   50565 retry.go:31] will retry after 206.578168ms: waiting for machine to come up
	I0213 23:08:50.143704   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.144239   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.144263   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.144177   50565 retry.go:31] will retry after 378.113433ms: waiting for machine to come up
	I0213 23:08:50.524043   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.524670   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.524703   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.524629   50565 retry.go:31] will retry after 468.261692ms: waiting for machine to come up
	I0213 23:08:50.995002   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.995616   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.995645   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.995524   50565 retry.go:31] will retry after 437.792222ms: waiting for machine to come up
	I0213 23:08:50.135427   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:50.139087   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139523   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:50.139556   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139840   49715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:50.145191   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:50.159814   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:50.159873   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:50.208873   49715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:50.208947   49715 ssh_runner.go:195] Run: which lz4
	I0213 23:08:50.214254   49715 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:08:50.219979   49715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:50.220013   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:47.833116   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:47.862550   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:47.895377   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:47.919843   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:47.919894   49120 system_pods.go:61] "coredns-76f75df574-hgzcn" [a384c748-9d5b-4d07-b03c-5a65b3d7a450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:47.919907   49120 system_pods.go:61] "etcd-no-preload-778731" [44169811-10f1-4d3e-8eaa-b525dd0f722f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:47.919920   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [126febb5-8d0b-4162-b320-7fd718b4a974] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:47.919929   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [a7be9641-1bd0-41f9-853a-73b522c60746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:47.919945   49120 system_pods.go:61] "kube-proxy-msxf7" [81201ce9-6f3d-457c-b582-eb8a17dbf4eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:47.919968   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [72f487c5-c42e-4e42-85c8-3b3df6bccd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:47.919984   49120 system_pods.go:61] "metrics-server-57f55c9bc5-r44rm" [ae0751b9-57fe-4d99-b41c-5c685b846e1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:47.919996   49120 system_pods.go:61] "storage-provisioner" [e1d157b3-7ce1-488c-a3ea-ab0e8da83fb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:47.920009   49120 system_pods.go:74] duration metric: took 24.606913ms to wait for pod list to return data ...
	I0213 23:08:47.920031   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:47.930765   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:47.930810   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:47.930827   49120 node_conditions.go:105] duration metric: took 10.783663ms to run NodePressure ...
	I0213 23:08:47.930848   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:48.401055   49120 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407167   49120 kubeadm.go:787] kubelet initialised
	I0213 23:08:48.407238   49120 kubeadm.go:788] duration metric: took 6.148946ms waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407260   49120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:48.414170   49120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:50.427883   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:52.431208   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:49.861114   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.361308   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.861249   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.894694   49443 api_server.go:72] duration metric: took 3.033850926s to wait for apiserver process to appear ...
	I0213 23:08:50.894724   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:50.894746   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:50.895231   49443 api_server.go:269] stopped: https://192.168.61.56:8443/healthz: Get "https://192.168.61.56:8443/healthz": dial tcp 192.168.61.56:8443: connect: connection refused
	I0213 23:08:51.394882   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:51.435131   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:51.435705   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:51.435733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:51.435616   50565 retry.go:31] will retry after 631.237829ms: waiting for machine to come up
	I0213 23:08:52.069120   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.069697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.069719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.069617   50565 retry.go:31] will retry after 756.691364ms: waiting for machine to come up
	I0213 23:08:52.828166   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.828631   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.828662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.828562   50565 retry.go:31] will retry after 761.909065ms: waiting for machine to come up
	I0213 23:08:53.592196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:53.592753   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:53.592779   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:53.592685   50565 retry.go:31] will retry after 1.153412106s: waiting for machine to come up
	I0213 23:08:54.747606   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:54.748184   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:54.748221   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:54.748113   50565 retry.go:31] will retry after 1.198347182s: waiting for machine to come up
	I0213 23:08:55.947978   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:55.948524   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:55.948545   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:55.948469   50565 retry.go:31] will retry after 2.116247229s: waiting for machine to come up
	I0213 23:08:52.713946   49715 crio.go:444] Took 2.499735 seconds to copy over tarball
	I0213 23:08:52.714030   49715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:56.483125   49715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.769061262s)
	I0213 23:08:56.483156   49715 crio.go:451] Took 3.769175 seconds to extract the tarball
	I0213 23:08:56.483167   49715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:56.524290   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:56.576319   49715 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:56.576349   49715 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:56.576435   49715 ssh_runner.go:195] Run: crio config
	I0213 23:08:56.633481   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:08:56.633514   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:56.633537   49715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:56.633561   49715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-083863 NodeName:default-k8s-diff-port-083863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:56.633744   49715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-083863"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:56.633838   49715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-083863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 23:08:56.633930   49715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:56.643018   49715 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:56.643110   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:56.652116   49715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 23:08:56.670140   49715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:56.687456   49715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0213 23:08:56.707317   49715 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:56.711339   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:56.726090   49715 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863 for IP: 192.168.39.3
	I0213 23:08:56.726139   49715 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:56.726320   49715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:56.726381   49715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:56.726486   49715 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.key
	I0213 23:08:56.755690   49715 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key.599d509e
	I0213 23:08:56.755797   49715 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key
	I0213 23:08:56.755953   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:56.755996   49715 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:56.756008   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:56.756042   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:56.756072   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:56.756104   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:56.756157   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:56.756999   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:56.790072   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:56.821182   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:56.849753   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:56.875241   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:56.901057   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:56.929989   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:56.959488   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:56.991678   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:57.019756   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:57.047743   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:57.078812   49715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:57.097081   49715 ssh_runner.go:195] Run: openssl version
	I0213 23:08:57.103754   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:57.117364   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124069   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124160   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.132252   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:57.145398   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:57.158348   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164091   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164158   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.171693   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:57.185004   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:57.198410   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204432   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204495   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.210331   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:57.221567   49715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:57.226357   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:57.232307   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:57.239034   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:57.245485   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:57.252782   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:57.259406   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:57.265644   49715 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:57.265744   49715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:57.265820   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:57.313129   49715 cri.go:89] found id: ""
	I0213 23:08:57.313210   49715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:57.323716   49715 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:57.323747   49715 kubeadm.go:636] restartCluster start
	I0213 23:08:57.323837   49715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:57.333805   49715 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.335100   49715 kubeconfig.go:92] found "default-k8s-diff-port-083863" server: "https://192.168.39.3:8444"
	I0213 23:08:57.337669   49715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:57.347371   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.347434   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.359168   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:53.424206   49120 pod_ready.go:92] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:53.424235   49120 pod_ready.go:81] duration metric: took 5.01002772s waiting for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:53.424249   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:55.432858   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:54.636558   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.636595   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.636612   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.714679   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.714727   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.894910   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.909668   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:54.909716   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.395328   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.401124   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.401155   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.895827   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.901814   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.901848   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.395611   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.402367   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.402404   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.894889   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.900228   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.900267   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.394804   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.404774   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.404811   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.895090   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.902470   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.902527   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:58.395650   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:58.404727   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:08:58.413383   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:08:58.413425   49443 api_server.go:131] duration metric: took 7.518687282s to wait for apiserver health ...
	I0213 23:08:58.413437   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:58.413444   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:58.415682   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:58.417320   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:58.436763   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:58.468658   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:58.482719   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:58.482755   49443 system_pods.go:61] "coredns-5dd5756b68-h86p6" [9d274749-fe12-43c1-b30c-70586c04daf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:58.482762   49443 system_pods.go:61] "etcd-embed-certs-340656" [1fbdd834-b8c1-48c9-aab7-3c72d7012eca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:58.482770   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [3bb1cfb1-8fea-4b7a-a459-a709010ee6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:58.482783   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [f8035337-1819-4b0b-83eb-1992445c0185] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:58.482790   49443 system_pods.go:61] "kube-proxy-swxwt" [2bbc949c-f478-4c01-9e81-884a05a9a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:58.482795   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [923ef614-eef1-4e32-ae83-2e540841060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:58.482831   49443 system_pods.go:61] "metrics-server-57f55c9bc5-lmcwv" [a948cc5d-01b6-4298-a7c7-24d9704497d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:58.482846   49443 system_pods.go:61] "storage-provisioner" [9fc17bde-ff30-4ed7-829c-3d59badd55f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:58.482854   49443 system_pods.go:74] duration metric: took 14.17202ms to wait for pod list to return data ...
	I0213 23:08:58.482865   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:58.487666   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:58.487710   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:58.487723   49443 node_conditions.go:105] duration metric: took 4.851634ms to run NodePressure ...
	I0213 23:08:58.487743   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:59.044504   49443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088347   49443 kubeadm.go:787] kubelet initialised
	I0213 23:08:59.088379   49443 kubeadm.go:788] duration metric: took 43.842389ms waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088390   49443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:59.105292   49443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
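The repeated healthz probes above (api_server.go:253/279) follow a simple pattern: GET https://<apiserver>:8443/healthz roughly every 500ms, log the per-check body whenever the status is not 200, and stop as soon as the endpoint returns ok. Below is a minimal, self-contained Go sketch of that loop; it is not minikube's actual api_server.go, and the endpoint, interval and timeout are copied from the log purely for illustration.

// healthz_poll_sketch.go - illustrative only, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url every interval until it returns HTTP 200 or the
// timeout elapses; on non-200 it prints the body, which is where the
// [+]/[-] per-check lines seen in the report come from.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA, so certificate
		// verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("healthz check error: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy within %s", timeout)
}

func main() {
	// Endpoint from the log above; ~500ms matches the observed cadence.
	if err := waitForHealthz("https://192.168.61.56:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log, the loop exits at 23:08:58.404 once /healthz finally returns 200, after the rbac and priority-class bootstrap post-start hooks finish.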
	I0213 23:08:58.067162   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:58.067629   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:58.067662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:58.067589   50565 retry.go:31] will retry after 2.740013841s: waiting for machine to come up
	I0213 23:09:00.811129   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:00.811590   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:00.811623   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:00.811537   50565 retry.go:31] will retry after 3.449503247s: waiting for machine to come up
	I0213 23:08:57.848036   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.848128   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.863924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.348357   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.348539   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.364081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.848249   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.848321   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.860671   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.348282   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.348385   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.364226   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.847737   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.847838   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.864832   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.348231   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.348311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.360532   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.848115   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.848220   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.861558   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.348101   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.348192   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.360173   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.847696   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.847788   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.859631   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:02.348255   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.348353   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.363081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
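The block above (process 49715) shows the step that comes before the healthz probe: minikube first confirms that a kube-apiserver process exists at all by running the same pgrep command about twice a second; pgrep exits 1 while nothing matches, which is what the repeated "Process exited with status 1" entries record. A hedged sketch of that probe loop, run locally with os/exec rather than over SSH as the real ssh_runner does:

// pgrep_wait_sketch.go - illustrative only, not minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID keeps running the same pgrep probe seen in the log
// until it prints a PID or the timeout expires.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		// pgrep exits 1 when no process matches; wait and try again.
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}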
	I0213 23:08:57.943272   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:58.432531   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:58.432613   49120 pod_ready.go:81] duration metric: took 5.008354336s waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.432631   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:00.441099   49120 pod_ready.go:102] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:01.440207   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.440235   49120 pod_ready.go:81] duration metric: took 3.0075951s waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.440249   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446456   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.446483   49120 pod_ready.go:81] duration metric: took 6.224957ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446495   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452476   49120 pod_ready.go:92] pod "kube-proxy-msxf7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.452509   49120 pod_ready.go:81] duration metric: took 6.006176ms waiting for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452520   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457619   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.457640   49120 pod_ready.go:81] duration metric: took 5.112826ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457648   49120 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.113738   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:03.114003   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.262520   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:04.262989   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:04.263018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:04.262939   50565 retry.go:31] will retry after 3.540479459s: waiting for machine to come up
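The "will retry after ..." lines from retry.go above come from a retry-with-backoff helper: each failed attempt to read the machine's IP schedules the next try after a somewhat longer, jittered delay. The sketch below is a generic exponential-backoff-with-jitter loop in that spirit; the exact schedule minikube uses differs, and the function and the simulated failure are illustrative only.

// backoff_sketch.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
// base*2^i plus random jitter between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	attempt := 0
	// Simulate a VM whose DHCP lease only shows up on the fourth check.
	err := retryWithBackoff(8, 500*time.Millisecond, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("finished, err =", err)
}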
	I0213 23:09:02.847964   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.848073   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.863100   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.347510   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.347608   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.362561   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.847536   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.847635   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.863357   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.347939   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.348026   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.363027   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.847491   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.847576   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.858924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.347449   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.347527   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.359307   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.847845   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.847934   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.859530   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.348136   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.348231   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.360149   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.847699   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.847786   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.859859   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.347717   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:07.347806   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:07.360175   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.360211   49715 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:07.360223   49715 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:07.360234   49715 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:07.360304   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:07.400269   49715 cri.go:89] found id: ""
	I0213 23:09:07.400360   49715 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:07.416990   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:07.426513   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:07.426588   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436165   49715 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436197   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:07.602305   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:03.467176   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:05.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.614199   49443 pod_ready.go:92] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:04.614230   49443 pod_ready.go:81] duration metric: took 5.508903545s waiting for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:04.614244   49443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:06.621198   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:08.622226   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:07.807018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:07.807577   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:07.807609   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:07.807519   50565 retry.go:31] will retry after 4.623412618s: waiting for machine to come up
	I0213 23:09:08.566096   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.757816   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.894570   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.984493   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:08.984609   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.485363   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.984792   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.485221   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.985649   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.485311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.516028   49715 api_server.go:72] duration metric: took 2.531534981s to wait for apiserver process to appear ...
	I0213 23:09:11.516054   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:11.516076   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
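Between 23:09:07 and 23:09:08 above, process 49715 reconfigures the cluster by re-running individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), all against the same /var/tmp/minikube/kubeadm.yaml, and then returns to waiting for the apiserver process and its healthz endpoint. A hedged sketch of driving that phase sequence with os/exec (not minikube's actual kubeadm.go; it assumes kubeadm is on PATH inside the guest):

// kubeadm_phases_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Phases in the order the log runs them; each reads the same config file.
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm init phase %s failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases re-run")
}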
	I0213 23:09:08.466006   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.965586   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.623965   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.623991   49443 pod_ready.go:81] duration metric: took 6.009738992s waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.624002   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631790   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.631813   49443 pod_ready.go:81] duration metric: took 7.802592ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631830   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638042   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.638065   49443 pod_ready.go:81] duration metric: took 6.226067ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638077   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645111   49443 pod_ready.go:92] pod "kube-proxy-swxwt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.645135   49443 pod_ready.go:81] duration metric: took 7.051124ms waiting for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645146   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651681   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.651703   49443 pod_ready.go:81] duration metric: took 6.550486ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651712   49443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:12.659172   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
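The pod_ready.go entries above poll each system-critical pod until its PodReady condition turns True; metrics-server-57f55c9bc5-lmcwv is still False in the latest check. The same readiness check expressed directly against the API with client-go, as an illustrative sketch (the kubeconfig path is a placeholder; the namespace, pod name and 4m0s budget are taken from the log):

// pod_ready_sketch.go - illustrative only, not minikube's pod_ready.go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; minikube uses the profile's kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // same budget as pod_ready.go
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-lmcwv", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}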
	I0213 23:09:12.435133   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435720   49036 main.go:141] libmachine: (old-k8s-version-245122) Found IP for machine: 192.168.50.36
	I0213 23:09:12.435751   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has current primary IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435762   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserving static IP address...
	I0213 23:09:12.436196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.436241   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | skip adding static IP to network mk-old-k8s-version-245122 - found existing host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"}
	I0213 23:09:12.436262   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserved static IP address: 192.168.50.36
	I0213 23:09:12.436280   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting for SSH to be available...
	I0213 23:09:12.436296   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Getting to WaitForSSH function...
	I0213 23:09:12.438534   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.438892   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.438925   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.439062   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH client type: external
	I0213 23:09:12.439099   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa (-rw-------)
	I0213 23:09:12.439149   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:09:12.439183   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | About to run SSH command:
	I0213 23:09:12.439202   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | exit 0
	I0213 23:09:12.541930   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | SSH cmd err, output: <nil>: 
	I0213 23:09:12.542357   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetConfigRaw
	I0213 23:09:12.543071   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.546226   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546714   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.546747   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546955   49036 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:09:12.547163   49036 machine.go:88] provisioning docker machine ...
	I0213 23:09:12.547200   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:12.547445   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547594   49036 buildroot.go:166] provisioning hostname "old-k8s-version-245122"
	I0213 23:09:12.547615   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547770   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.550250   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.550734   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550939   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.551160   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551322   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.551648   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.551974   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.552000   49036 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname
	I0213 23:09:12.705495   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245122
	
	I0213 23:09:12.705528   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.708503   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.708860   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.708893   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.709092   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.709277   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709657   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.709831   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.710263   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.710285   49036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245122/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:09:12.858225   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:09:12.858266   49036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:09:12.858287   49036 buildroot.go:174] setting up certificates
	I0213 23:09:12.858300   49036 provision.go:83] configureAuth start
	I0213 23:09:12.858313   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.858624   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.861374   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861727   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.861759   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.864007   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864334   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.864370   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864549   49036 provision.go:138] copyHostCerts
	I0213 23:09:12.864627   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:09:12.864643   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:09:12.864728   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:09:12.864853   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:09:12.864868   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:09:12.864904   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:09:12.865008   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:09:12.865018   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:09:12.865049   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:09:12.865130   49036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245122 san=[192.168.50.36 192.168.50.36 localhost 127.0.0.1 minikube old-k8s-version-245122]
	I0213 23:09:12.938444   49036 provision.go:172] copyRemoteCerts
	I0213 23:09:12.938508   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:09:12.938530   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.941384   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.941758   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941989   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.942202   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.942394   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.942545   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.041212   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:09:13.069849   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 23:09:13.092979   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:09:13.115949   49036 provision.go:86] duration metric: configureAuth took 257.625697ms
	I0213 23:09:13.115983   49036 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:09:13.116196   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:13.116279   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.119207   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119644   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.119684   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119901   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.120096   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120288   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120443   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.120599   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.121149   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.121179   49036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:09:13.453399   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:09:13.453431   49036 machine.go:91] provisioned docker machine in 906.25243ms
	I0213 23:09:13.453444   49036 start.go:300] post-start starting for "old-k8s-version-245122" (driver="kvm2")
	I0213 23:09:13.453459   49036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:09:13.453479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.453816   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:09:13.453849   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.457033   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457355   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.457388   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457560   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.457778   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.457991   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.458207   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.559903   49036 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:09:13.566012   49036 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:09:13.566046   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:09:13.566119   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:09:13.566215   49036 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:09:13.566336   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:09:13.578878   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:13.610396   49036 start.go:303] post-start completed in 156.935564ms
	I0213 23:09:13.610434   49036 fix.go:56] fixHost completed within 25.25543712s
	I0213 23:09:13.610459   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.613960   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614271   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.614330   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614575   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.614828   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615081   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615275   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.615494   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.615954   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.615977   49036 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0213 23:09:13.759068   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865753.693690059
	
	I0213 23:09:13.759095   49036 fix.go:206] guest clock: 1707865753.693690059
	I0213 23:09:13.759106   49036 fix.go:219] Guest: 2024-02-13 23:09:13.693690059 +0000 UTC Remote: 2024-02-13 23:09:13.610438113 +0000 UTC m=+362.380845041 (delta=83.251946ms)
	I0213 23:09:13.759130   49036 fix.go:190] guest clock delta is within tolerance: 83.251946ms
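
The clock-fix step above reduces to an absolute guest-vs-host delta compared against a tolerance: the guest time is read via `date +%s.%N`, subtracted from the host's remote timestamp, and accepted if the difference is small enough. A minimal, self-contained Go sketch of that check (not minikube's actual fix.go code; the function name and tolerance value are illustrative):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock read over SSH is close
// enough to the host clock; both sides are compared as absolute delta.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(83 * time.Millisecond) // roughly the ~83ms delta logged above
	fmt.Println(withinTolerance(guest, host, 2*time.Second))
}
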
	I0213 23:09:13.759136   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 25.404173426s
	I0213 23:09:13.759161   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.759480   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:13.762537   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.762928   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.762967   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.763172   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763718   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763907   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763998   49036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:09:13.764050   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.764122   49036 ssh_runner.go:195] Run: cat /version.json
	I0213 23:09:13.764149   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.767081   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767387   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767526   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767558   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767736   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.767812   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767834   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.768002   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.768190   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768220   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768343   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768370   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.768490   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.886145   49036 ssh_runner.go:195] Run: systemctl --version
	I0213 23:09:13.892222   49036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:09:14.044107   49036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:09:14.051031   49036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:09:14.051134   49036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:09:14.071908   49036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:09:14.071942   49036 start.go:475] detecting cgroup driver to use...
	I0213 23:09:14.072026   49036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:09:14.091007   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:09:14.105419   49036 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:09:14.105501   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:09:14.120760   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:09:14.135296   49036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:09:14.267338   49036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:09:14.403936   49036 docker.go:233] disabling docker service ...
	I0213 23:09:14.404023   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:09:14.419791   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:09:14.434449   49036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:09:14.569365   49036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:09:14.700619   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:09:14.718646   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:09:14.738870   49036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0213 23:09:14.738944   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.750436   49036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:09:14.750529   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.762397   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.773950   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.786798   49036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:09:14.801457   49036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:09:14.813254   49036 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:09:14.813331   49036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:09:14.830374   49036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
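
The netfilter step above follows a probe-then-fallback pattern: try to read the bridge sysctl, and if the key is missing (as in the status-255 failure logged), load br_netfilter and enable IPv4 forwarding. A hedged Go sketch of the same pattern using only command invocations seen in the log (the helper name is illustrative, not minikube's crio.go code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter probes net.bridge.bridge-nf-call-iptables and, if it
// is not exposed yet, loads br_netfilter and enables IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl already present, nothing to do
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
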
	I0213 23:09:14.840984   49036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:09:14.994777   49036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:09:15.193564   49036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:09:15.193657   49036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:09:15.200616   49036 start.go:543] Will wait 60s for crictl version
	I0213 23:09:15.200749   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:15.205888   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:09:15.249751   49036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:09:15.249884   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.302320   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.361046   49036 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0213 23:09:15.362396   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:15.365548   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366008   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:15.366041   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366287   49036 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:09:15.370727   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:15.384064   49036 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:09:15.384171   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:15.432027   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:15.432110   49036 ssh_runner.go:195] Run: which lz4
	I0213 23:09:15.436393   49036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:09:15.440914   49036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:09:15.440956   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
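
The preload handling above is: stat the tarball on the guest, copy the local cache copy over if it is absent, then extract it into /var with lz4-compressed tar (the extraction shows up later in this log, at 23:09:17-23:09:20). A rough, self-contained Go sketch of that flow; the runSSH helper and the scp-based transfer are illustrative stand-ins, not minikube's ssh_runner API:

package main

import (
	"fmt"
	"os/exec"
)

// loadPreload checks for the preload tarball on the guest, copies it over if
// missing, and extracts it into /var. Host address and paths mirror the log.
func loadPreload(runSSH func(args ...string) error, localTarball string) error {
	if err := runSSH("stat", "/preloaded.tar.lz4"); err != nil {
		if err := exec.Command("scp", localTarball, "docker@192.168.50.36:/preloaded.tar.lz4").Run(); err != nil {
			return fmt.Errorf("copy preload tarball: %w", err)
		}
	}
	return runSSH("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
}

func main() {
	runSSH := func(args ...string) error {
		return exec.Command("ssh", append([]string{"docker@192.168.50.36"}, args...)...).Run()
	}
	fmt.Println(loadPreload(runSSH, "preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"))
}
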
	I0213 23:09:15.218410   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:15.218442   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:15.218457   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.346077   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.346112   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:15.516188   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.523339   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.523371   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.016747   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.024910   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.024944   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.516538   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.528640   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.528673   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:17.016269   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:17.022413   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:09:17.033775   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:09:17.033807   49715 api_server.go:131] duration metric: took 5.51774459s to wait for apiserver health ...
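
The healthz wait above is a simple poll loop: keep issuing GET /healthz, tolerating the 403 while anonymous access is still forbidden and the 500s while post-start hooks finish, until the endpoint returns 200 "ok" or the deadline passes. A minimal Go sketch of such a loop, assuming TLS verification is skipped as is typical for a throwaway test cluster (interval and timeout values are illustrative; minikube's real check also inspects the response body):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok", as at 23:09:17 above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.3:8444/healthz", 2*time.Minute))
}
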
	I0213 23:09:17.033819   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:09:17.033828   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:17.035635   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:17.037195   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:17.064472   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
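
The 457-byte conflist written above is a conventional two-plugin bridge + portmap CNI configuration. A hedged example of what such a file can look like, generated from Go; the field values here are illustrative and the exact content minikube writes may differ:

package main

import (
	"encoding/json"
	"fmt"
)

// Builds a minimal bridge+portmap conflist of the kind placed at
// /etc/cni/net.d/1-k8s.conflist and prints it as JSON.
func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []interface{}{
			map[string]interface{}{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]interface{}{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // this is what would be copied to the guest
}
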
	I0213 23:09:17.115519   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:17.133771   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:09:17.133887   49715 system_pods.go:61] "coredns-5dd5756b68-cvtjg" [507ded52-9061-4ab7-8298-31847da5dad3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:09:17.133914   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [2ef46644-d4d0-4e8c-b2aa-4e154780be70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:09:17.133952   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [c1f51407-cfd9-4329-9153-2dacb87952c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:09:17.133975   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [1ad24825-8c75-4220-a316-2dd4826da8fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:09:17.133995   49715 system_pods.go:61] "kube-proxy-zzskr" [fb71ceb1-9f9a-4c8b-ae1e-1eeb91706110] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:09:17.134015   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [4500697c-7313-4217-9843-14edb2c7fdb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:09:17.134042   49715 system_pods.go:61] "metrics-server-57f55c9bc5-p97jh" [dc549bc9-87e4-4cb6-99b5-e937f2916d6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:09:17.134063   49715 system_pods.go:61] "storage-provisioner" [c5ad957d-09f9-46e7-b0e7-e7c0b13f671f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:09:17.134081   49715 system_pods.go:74] duration metric: took 18.533785ms to wait for pod list to return data ...
	I0213 23:09:17.134103   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:17.145025   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:17.145131   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:17.145159   49715 node_conditions.go:105] duration metric: took 11.041762ms to run NodePressure ...
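
The system_pods/node_conditions step above amounts to listing kube-system pods and reporting their phase and readiness before kubeadm continues. A short client-go sketch of that listing step (the kubeconfig path is an illustrative placeholder and error handling is kept minimal; this is not minikube's system_pods.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists kube-system pods and prints their phase.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
	}
}
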
	I0213 23:09:17.145201   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:13.466367   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:15.966324   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:14.661158   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:16.663448   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:19.164418   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.224597   49036 crio.go:444] Took 1.788234 seconds to copy over tarball
	I0213 23:09:17.224685   49036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:09:20.618866   49036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.394137292s)
	I0213 23:09:20.618905   49036 crio.go:451] Took 3.394273 seconds to extract the tarball
	I0213 23:09:20.618918   49036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:09:20.665417   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:20.718004   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:20.718036   49036 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.718175   49036 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.718201   49036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.718126   49036 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.718148   49036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.718154   49036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.718181   49036 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719739   49036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719784   49036 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.719745   49036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.719855   49036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.719951   49036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.720062   49036 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 23:09:20.720172   49036 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.720184   49036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.877532   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.894803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.906336   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.909341   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.910608   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 23:09:20.933612   49036 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 23:09:20.933664   49036 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.933724   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:20.947803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.979922   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.026909   49036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 23:09:21.026953   49036 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.026986   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.034243   49036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 23:09:21.034279   49036 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.034321   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.053547   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:21.068143   49036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 23:09:21.068194   49036 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 23:09:21.068228   49036 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0213 23:09:21.068195   49036 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0213 23:09:21.068318   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.110630   49036 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 23:09:21.110695   49036 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.110747   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.120732   49036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 23:09:21.120777   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.120781   49036 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.120851   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.120887   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.272660   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0213 23:09:21.272723   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 23:09:21.272771   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.272813   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.272858   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 23:09:21.272914   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.272966   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 23:09:17.706218   49715 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713293   49715 kubeadm.go:787] kubelet initialised
	I0213 23:09:17.713322   49715 kubeadm.go:788] duration metric: took 7.076014ms waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713332   49715 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:17.724146   49715 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:19.733686   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.412892   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.970757   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:20.466081   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.467149   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.660264   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:23.660813   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.375314   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 23:09:21.376306   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 23:09:21.376453   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 23:09:21.376491   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 23:09:21.585135   49036 cache_images.go:92] LoadImages completed in 867.071904ms
	W0213 23:09:21.585230   49036 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
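
The image-cache pass above inspects each required image in the runtime, marks any whose ID does not match the expected digest as "needs transfer", removes it with crictl, and reloads it from the local cache directory; the warning fires when a cache file such as coredns_1.6.2 is missing on disk. A hedged Go sketch of that per-image decision, using only the podman/crictl invocations seen in the log (the helper name is illustrative, not minikube's cache_images implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureImage inspects an image in the runtime and, if it is missing or its
// ID differs from the expected digest, removes it and reloads it from the
// on-disk cache file.
func ensureImage(image, expectedID, cacheFile string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == expectedID {
		return nil // already present with the expected ID
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // best effort
	if _, statErr := os.Stat(cacheFile); statErr != nil {
		return fmt.Errorf("loading cached images: %w", statErr) // e.g. coredns_1.6.2 missing above
	}
	return exec.Command("sudo", "podman", "load", "-i", cacheFile).Run()
}

func main() {
	err := ensureImage("registry.k8s.io/coredns:1.6.2",
		"bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b",
		"/home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2")
	fmt.Println(err)
}
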
	I0213 23:09:21.585316   49036 ssh_runner.go:195] Run: crio config
	I0213 23:09:21.650741   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:21.650767   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:21.650789   49036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:09:21.650812   49036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245122 NodeName:old-k8s-version-245122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:09:21.650991   49036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-245122"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-245122
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.36:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:09:21.651106   49036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-245122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:09:21.651173   49036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 23:09:21.662478   49036 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:09:21.662558   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:09:21.672654   49036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0213 23:09:21.690609   49036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:09:21.708199   49036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0213 23:09:21.728361   49036 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0213 23:09:21.732450   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:21.747349   49036 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122 for IP: 192.168.50.36
	I0213 23:09:21.747391   49036 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:21.747532   49036 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:09:21.747582   49036 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:09:21.747644   49036 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.key
	I0213 23:09:21.958574   49036 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key.e3c4a843
	I0213 23:09:21.958790   49036 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key
	I0213 23:09:21.958978   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:09:21.959024   49036 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:09:21.959040   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:09:21.959090   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:09:21.959135   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:09:21.959168   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:09:21.959234   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:21.960121   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:09:21.986921   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:09:22.011993   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:09:22.038194   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:09:22.064839   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:09:22.089629   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:09:22.116404   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:09:22.141615   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:09:22.167298   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:09:22.194577   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:09:22.220140   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:09:22.245124   49036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:09:22.265798   49036 ssh_runner.go:195] Run: openssl version
	I0213 23:09:22.273510   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:09:22.287657   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294180   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294261   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.300826   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:09:22.313535   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:09:22.324047   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329069   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329171   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.335862   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:09:22.347417   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:09:22.358082   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363477   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363536   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.369915   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:09:22.380910   49036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:09:22.385812   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:09:22.392981   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:09:22.400722   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:09:22.409089   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:09:22.417036   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:09:22.423381   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:09:22.430098   49036 kubeadm.go:404] StartCluster: {Name:old-k8s-version-245122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:09:22.430177   49036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:09:22.430246   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:22.490283   49036 cri.go:89] found id: ""
	I0213 23:09:22.490371   49036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:09:22.500902   49036 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:09:22.500931   49036 kubeadm.go:636] restartCluster start
	I0213 23:09:22.501004   49036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:09:22.511985   49036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:22.513298   49036 kubeconfig.go:92] found "old-k8s-version-245122" server: "https://192.168.50.36:8443"
	I0213 23:09:22.516673   49036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:09:22.526466   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:22.526561   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:22.539541   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.027052   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.027161   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.039390   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.527142   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.527234   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.539846   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.027048   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.027144   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.038367   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.526911   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.527012   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.538906   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.027095   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.027195   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.038232   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.526805   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.526911   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.540281   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:26.026811   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.026908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.039699   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.238007   49715 pod_ready.go:92] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:23.238035   49715 pod_ready.go:81] duration metric: took 5.513854942s waiting for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:23.238051   49715 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.744985   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:24.745007   49715 pod_ready.go:81] duration metric: took 1.506948533s waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.745015   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:26.751610   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:24.965048   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:27.465069   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.159564   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:28.660224   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.527051   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.527135   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.539382   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.026915   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.026990   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.038660   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.527300   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.527391   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.539714   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.027042   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.027124   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.039419   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.527549   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.527649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.540659   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.027032   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.027134   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.038415   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.526595   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.526690   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.538928   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.027041   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.027119   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.040125   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.526693   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.526765   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.540060   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:31.026988   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.027096   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.039327   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.755419   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.254128   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.254154   49715 pod_ready.go:81] duration metric: took 6.509132102s waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.254164   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262007   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.262032   49715 pod_ready.go:81] duration metric: took 7.859557ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262042   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267937   49715 pod_ready.go:92] pod "kube-proxy-zzskr" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.267959   49715 pod_ready.go:81] duration metric: took 5.911683ms waiting for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267967   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273442   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.273462   49715 pod_ready.go:81] duration metric: took 5.488135ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273471   49715 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:29.466908   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.965093   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.159176   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.159463   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.526738   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.526879   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.539174   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.026678   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.026780   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.039078   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.527030   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.527120   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.539058   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.539094   49036 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:32.539105   49036 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:32.539116   49036 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:32.539188   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:32.583832   49036 cri.go:89] found id: ""
	I0213 23:09:32.583931   49036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:32.600343   49036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:32.609666   49036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:32.609744   49036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619068   49036 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619093   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:32.751642   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:33.784796   49036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03311496s)
	I0213 23:09:33.784825   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.013311   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.172539   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.290655   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:34.290759   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:34.791649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.290908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.791035   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:33.283651   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.798120   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.966930   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.465311   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.160502   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:37.163077   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.291009   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.791117   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.809796   49036 api_server.go:72] duration metric: took 2.519141205s to wait for apiserver process to appear ...
	I0213 23:09:36.809851   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:36.809880   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:38.282180   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.282368   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:38.466126   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.967293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.811101   49036 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 23:09:41.811184   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.485465   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.485495   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.485516   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.539632   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.539667   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.809967   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.823007   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:42.823043   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.310359   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.318326   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:43.318384   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.809942   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.816666   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:09:43.824593   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:09:43.824622   49036 api_server.go:131] duration metric: took 7.014763564s to wait for apiserver health ...
	I0213 23:09:43.824639   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:43.824647   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:43.826660   49036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:39.659667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.660321   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.664984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.827993   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:43.837268   49036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:43.855659   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:43.864719   49036 system_pods.go:59] 7 kube-system pods found
	I0213 23:09:43.864756   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:09:43.864764   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:09:43.864770   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:09:43.864778   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Pending
	I0213 23:09:43.864783   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:09:43.864789   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:09:43.864795   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:09:43.864803   49036 system_pods.go:74] duration metric: took 9.113954ms to wait for pod list to return data ...
	I0213 23:09:43.864812   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:43.872183   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:43.872222   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:43.872237   49036 node_conditions.go:105] duration metric: took 7.415138ms to run NodePressure ...
	I0213 23:09:43.872269   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:44.129786   49036 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134864   49036 kubeadm.go:787] kubelet initialised
	I0213 23:09:44.134891   49036 kubeadm.go:788] duration metric: took 5.071047ms waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134901   49036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:44.139027   49036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.143942   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143967   49036 pod_ready.go:81] duration metric: took 4.910454ms waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.143978   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143986   49036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.147838   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147923   49036 pod_ready.go:81] duration metric: took 3.927311ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.147935   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147944   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.152465   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152490   49036 pod_ready.go:81] duration metric: took 4.536109ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.152500   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152508   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.259273   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259309   49036 pod_ready.go:81] duration metric: took 106.789068ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.259325   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259334   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.659385   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659423   49036 pod_ready.go:81] duration metric: took 400.079528ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.659436   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659443   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:45.065474   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065510   49036 pod_ready.go:81] duration metric: took 406.055078ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:45.065524   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065533   49036 pod_ready.go:38] duration metric: took 930.621868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:45.065555   49036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:09:45.100009   49036 ops.go:34] apiserver oom_adj: -16
	I0213 23:09:45.100037   49036 kubeadm.go:640] restartCluster took 22.599099367s
	I0213 23:09:45.100049   49036 kubeadm.go:406] StartCluster complete in 22.6699561s
	I0213 23:09:45.100070   49036 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.100156   49036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:09:45.103031   49036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.103315   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:09:45.103447   49036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:09:45.103540   49036 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103562   49036 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-245122"
	I0213 23:09:45.103571   49036 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103593   49036 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:45.103603   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:45.103638   49036 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103693   49036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245122"
	W0213 23:09:45.103608   49036 addons.go:243] addon metrics-server should already be in state true
	W0213 23:09:45.103577   49036 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:09:45.103879   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104144   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104215   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104227   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.104318   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.103829   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104877   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104904   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.123332   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0213 23:09:45.123486   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0213 23:09:45.123555   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0213 23:09:45.123964   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124143   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124148   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124449   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124469   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124650   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124674   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124654   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124743   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124965   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125030   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125083   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.125564   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125567   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125598   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.125612   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.129046   49036 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-245122"
	W0213 23:09:45.129065   49036 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:09:45.129085   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.129385   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.129415   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.145900   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0213 23:09:45.146570   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.147144   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.147164   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.147448   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.147635   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.156023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.158533   49036 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:09:45.159815   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:09:45.159837   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:09:45.159862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.163799   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164445   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.164472   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164859   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.165112   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.165340   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.165523   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.166097   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0213 23:09:45.166513   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.167086   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.167111   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.167442   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.167623   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.168284   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0213 23:09:45.168855   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.169453   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.169471   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.169702   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.169992   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.171532   49036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:45.170687   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.172965   49036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.172979   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.172983   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:09:45.173009   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.176733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177198   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.177232   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177269   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.177506   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.177675   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.177885   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.190339   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0213 23:09:45.190750   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.191239   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.191267   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.191609   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.191803   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.193470   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.193730   49036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.193748   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:09:45.193769   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.196896   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197422   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.197459   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197745   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.197935   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.198191   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.198301   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.392787   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:09:45.392808   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:09:45.426298   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.440984   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.452209   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:09:45.452239   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:09:45.531203   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:45.531226   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:09:45.593779   49036 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 23:09:45.621016   49036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245122" context rescaled to 1 replicas
	I0213 23:09:45.621056   49036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:09:45.623081   49036 out.go:177] * Verifying Kubernetes components...
	I0213 23:09:45.624623   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:09:45.631546   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
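
For context, the lines above stage the addon manifests on the node (the "scp memory -->" entries) and then apply them with the kubectl binary matching the cluster version over SSH. Below is a minimal sketch, not minikube's actual ssh_runner implementation, of running that same apply command over SSH; the host IP, username, key path, and manifest paths are copied from the log, and error handling is trimmed.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and login details as shown in the sshutil.go lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; host key not pinned in this sketch
	}
	client, err := ssh.Dial("tcp", "192.168.50.36:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same command the log runs: apply the staged metrics-server manifests
	// with the kubectl binary that matches the cluster version (v1.16.0).
	cmd := `sudo KUBECONFIG=/var/lib/minikube/kubeconfig ` +
		`/var/lib/minikube/binaries/v1.16.0/kubectl apply ` +
		`-f /etc/kubernetes/addons/metrics-apiservice.yaml ` +
		`-f /etc/kubernetes/addons/metrics-server-deployment.yaml ` +
		`-f /etc/kubernetes/addons/metrics-server-rbac.yaml ` +
		`-f /etc/kubernetes/addons/metrics-server-service.yaml`
	out, err := sess.CombinedOutput(cmd)
	fmt.Println(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
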
	I0213 23:09:46.116692   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116732   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.116735   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116736   49036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:46.116754   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117125   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117172   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117183   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117192   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117201   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117203   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117218   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117228   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117247   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117667   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117671   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117708   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117728   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117962   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117980   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140111   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.140133   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.140411   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.140441   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140431   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.228877   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.228908   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229250   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229273   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229273   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.229283   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.229293   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229523   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229538   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229558   49036 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:46.231176   49036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:09:46.232329   49036 addons.go:505] enable addons completed in 1.128872958s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:09:42.783163   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:44.783634   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.281934   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.465665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:45.964909   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:46.160084   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.664267   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.120153   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:50.120636   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:49.781808   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.281392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.968701   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:50.465488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:51.161059   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:53.662099   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.121578   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:53.120859   49036 node_ready.go:49] node "old-k8s-version-245122" has status "Ready":"True"
	I0213 23:09:53.120885   49036 node_ready.go:38] duration metric: took 7.004121529s waiting for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:53.120896   49036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:53.129174   49036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:55.136200   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.283011   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.286197   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.964530   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.964679   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.966183   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.159475   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.160233   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:57.636373   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.137616   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.782611   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:59.465313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.465877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.660202   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.159244   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:02.635052   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:04.636231   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.284083   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.781701   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.966234   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.465225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.160136   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.160817   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.161703   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.636789   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.135398   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.135441   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.782000   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.782948   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.785161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:08.465688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:10.967225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.658937   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.661460   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.138346   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.636437   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:14.282538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.781339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.465521   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.965224   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.162065   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:18.658525   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.648838   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.137226   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:19.282514   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:21.781917   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.966716   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.464644   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.465071   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.659514   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.662481   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.636371   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.136197   49036 pod_ready.go:92] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.136234   49036 pod_ready.go:81] duration metric: took 31.007029263s waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.136249   49036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142089   49036 pod_ready.go:92] pod "etcd-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.142114   49036 pod_ready.go:81] duration metric: took 5.854061ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142127   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149372   49036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.149396   49036 pod_ready.go:81] duration metric: took 7.261015ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149409   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158342   49036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.158371   49036 pod_ready.go:81] duration metric: took 8.953577ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158384   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165154   49036 pod_ready.go:92] pod "kube-proxy-nj7qx" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.165177   49036 pod_ready.go:81] duration metric: took 6.785683ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165186   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533838   49036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.533863   49036 pod_ready.go:81] duration metric: took 368.670292ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533896   49036 pod_ready.go:38] duration metric: took 31.412988042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:10:24.533912   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:10:24.534007   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:10:24.549186   49036 api_server.go:72] duration metric: took 38.928101792s to wait for apiserver process to appear ...
	I0213 23:10:24.549217   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:10:24.549238   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:10:24.557366   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:10:24.558364   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:10:24.558387   49036 api_server.go:131] duration metric: took 9.165129ms to wait for apiserver health ...
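
The two lines above record the apiserver healthz probe (api_server.go:253): GET https://192.168.50.36:8443/healthz, expecting "ok". A minimal sketch of such a probe follows; the real check authenticates with the cluster's client certificates, whereas this illustration skips TLS verification purely to stay self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.36:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the actual client uses the cluster certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
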
	I0213 23:10:24.558396   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:10:24.736365   49036 system_pods.go:59] 8 kube-system pods found
	I0213 23:10:24.736396   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:24.736401   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:24.736405   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:24.736409   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:24.736413   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:24.736417   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:24.736423   49036 system_pods.go:61] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:24.736429   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:24.736437   49036 system_pods.go:74] duration metric: took 178.035411ms to wait for pod list to return data ...
	I0213 23:10:24.736444   49036 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:10:24.934360   49036 default_sa.go:45] found service account: "default"
	I0213 23:10:24.934390   49036 default_sa.go:55] duration metric: took 197.940334ms for default service account to be created ...
	I0213 23:10:24.934400   49036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:10:25.135904   49036 system_pods.go:86] 8 kube-system pods found
	I0213 23:10:25.135933   49036 system_pods.go:89] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:25.135940   49036 system_pods.go:89] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:25.135944   49036 system_pods.go:89] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:25.135949   49036 system_pods.go:89] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:25.135954   49036 system_pods.go:89] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:25.135959   49036 system_pods.go:89] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:25.135967   49036 system_pods.go:89] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:25.135973   49036 system_pods.go:89] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:25.135982   49036 system_pods.go:126] duration metric: took 201.576732ms to wait for k8s-apps to be running ...
	I0213 23:10:25.135992   49036 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:10:25.136035   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:10:25.151540   49036 system_svc.go:56] duration metric: took 15.53628ms WaitForService to wait for kubelet.
	I0213 23:10:25.151582   49036 kubeadm.go:581] duration metric: took 39.530502672s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:10:25.151608   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:10:25.333026   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:10:25.333067   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:10:25.333083   49036 node_conditions.go:105] duration metric: took 181.468311ms to run NodePressure ...
	I0213 23:10:25.333171   49036 start.go:228] waiting for startup goroutines ...
	I0213 23:10:25.333186   49036 start.go:233] waiting for cluster config update ...
	I0213 23:10:25.333200   49036 start.go:242] writing updated cluster config ...
	I0213 23:10:25.333540   49036 ssh_runner.go:195] Run: rm -f paused
	I0213 23:10:25.385974   49036 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0213 23:10:25.388225   49036 out.go:177] 
	W0213 23:10:25.389965   49036 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0213 23:10:25.391288   49036 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0213 23:10:25.392550   49036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-245122" cluster and "default" namespace by default
	I0213 23:10:24.281840   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.782341   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.467427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.965363   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:25.158811   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:27.158903   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.162245   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.283592   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.781156   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.465534   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.965570   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.163299   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.664184   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:34.281475   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.282050   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.966548   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.465588   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.159425   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.161056   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.781806   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.782565   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.465618   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.966613   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.659031   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.660105   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:43.282453   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.782436   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.967065   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.465277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.161783   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.659092   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:48.281903   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:50.782326   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.965978   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.972688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:52.464489   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.661150   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:51.661183   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.159746   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:53.280877   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:55.281432   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.465386   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.966020   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.659863   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.161127   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:57.781250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:00.283244   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.464959   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.466871   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.660636   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:04.162081   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:02.782971   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.282593   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:03.964986   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.967545   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:06.660761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.663916   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:07.783437   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.280975   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.281595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.466954   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.965354   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:11.159761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:13.160656   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:14.281819   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:16.781331   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.965830   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.464980   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.659894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.659996   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:18.782849   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.281343   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.965490   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.965841   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:22.465427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.660194   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.660348   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.158929   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:23.281731   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:25.282299   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.966008   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.463392   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:26.160687   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:28.160792   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.783770   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.282652   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:29.464941   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:31.965436   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.160850   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.661971   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.781595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.282110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:33.966260   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:36.465148   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.160093   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.160571   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.782870   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.281536   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:38.466898   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.965121   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:39.659930   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.160848   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.782134   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.287871   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.966494   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:45.465485   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.477988   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.659259   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:46.660566   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.165414   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.781501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.282150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.965827   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.465337   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:51.658915   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.160444   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.286142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.783072   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.465900   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.466029   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.659103   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.660419   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.784481   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.282749   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.965179   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.465662   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:00.661165   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.161035   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.787946   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:06.281932   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.964460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.966240   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.660384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.159544   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.781709   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.782556   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.465300   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.472665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.660651   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.159097   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.281500   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.781953   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:12.965510   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:14.966435   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.465559   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.160583   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.659605   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.784167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:20.280384   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:22.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.468825   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.965088   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.659644   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.662561   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.160923   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.781351   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:27.281938   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:23.966646   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.465094   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.160986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.161300   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:29.780690   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.282298   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.965450   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:31.467937   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.659169   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.659681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.782495   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.782679   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:33.965594   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.465409   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.660174   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.660802   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.160838   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.281205   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.281734   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:38.465702   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:40.965477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.659732   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:44.159873   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:43.780979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.781438   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:42.966342   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.464993   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.465742   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:46.162330   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:48.659964   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.782513   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:50.281255   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:52.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:49.967402   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.968499   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.161451   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:53.659594   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.782653   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.782779   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.465429   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.466199   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:55.659986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:57.661028   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:59.280842   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.281110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:58.965410   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:00.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.458755   49120 pod_ready.go:81] duration metric: took 4m0.00109163s waiting for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:01.458812   49120 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:01.458839   49120 pod_ready.go:38] duration metric: took 4m13.051566827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:01.458873   49120 kubeadm.go:640] restartCluster took 4m33.496925279s
	W0213 23:13:01.458967   49120 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:01.459008   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:00.160188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:02.663549   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:03.285939   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.782469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.165196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:07.661417   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:08.283394   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.286257   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.161461   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.652828   49443 pod_ready.go:81] duration metric: took 4m0.001101625s waiting for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:10.652857   49443 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:10.652877   49443 pod_ready.go:38] duration metric: took 4m11.564476633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:10.652905   49443 kubeadm.go:640] restartCluster took 4m34.344806193s
	W0213 23:13:10.652970   49443 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:10.652997   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:12.782042   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:15.282782   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:16.418651   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.959611919s)
	I0213 23:13:16.418750   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:16.435137   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:16.448436   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:16.459777   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:16.459826   49120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:16.708111   49120 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
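	The kubeadm.yaml passed via --config above is generated by minikube and is not reproduced in this log. A minimal configuration of the same general shape, assuming the kubeadm.k8s.io/v1beta3 config API and reusing values that appear elsewhere in this log (node IP, CRI socket, certificate dir), would look roughly like the sketch below; it is not the actual file written to /var/tmp/minikube/kubeadm.yaml.

	    apiVersion: kubeadm.k8s.io/v1beta3
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: 192.168.83.31          # node IP seen later in this run; illustrative only
	      bindPort: 8443
	    nodeRegistration:
	      criSocket: unix:///var/run/crio/crio.sock
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta3
	    kind: ClusterConfiguration
	    kubernetesVersion: v1.29.0-rc.2
	    controlPlaneEndpoint: control-plane.minikube.internal:8443
	    certificatesDir: /var/lib/minikube/certs    # matches the certificateDir reported by kubeadm below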
	I0213 23:13:17.782474   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:20.283238   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:22.782418   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:24.782894   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:26.784203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:28.667785   49120 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:13:28.667865   49120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:28.668000   49120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:28.668151   49120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:28.668282   49120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:28.668372   49120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:28.670147   49120 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:28.670266   49120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:28.670367   49120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:28.670480   49120 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:28.670559   49120 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:28.670674   49120 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:28.670763   49120 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:28.670864   49120 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:28.670964   49120 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:28.671068   49120 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:28.671163   49120 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:28.671221   49120 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:28.671296   49120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:28.671368   49120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:28.671440   49120 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0213 23:13:28.671506   49120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:28.671580   49120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:28.671658   49120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:28.671734   49120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:28.671791   49120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:28.673351   49120 out.go:204]   - Booting up control plane ...
	I0213 23:13:28.673448   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:28.673535   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:28.673627   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:28.673744   49120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:28.673846   49120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:28.673903   49120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:28.674084   49120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:28.674176   49120 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.010705 seconds
	I0213 23:13:28.674315   49120 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:28.674470   49120 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:28.674543   49120 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:28.674766   49120 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-778731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:28.674832   49120 kubeadm.go:322] [bootstrap-token] Using token: dwjaqi.e4fr4bxqfdq63m9e
	I0213 23:13:28.676266   49120 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:28.676392   49120 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:28.676495   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:28.676671   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:28.676871   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:28.677028   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:28.677142   49120 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:28.677283   49120 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:28.677337   49120 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:28.677392   49120 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:28.677405   49120 kubeadm.go:322] 
	I0213 23:13:28.677476   49120 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:28.677488   49120 kubeadm.go:322] 
	I0213 23:13:28.677586   49120 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:28.677599   49120 kubeadm.go:322] 
	I0213 23:13:28.677631   49120 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:28.677712   49120 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:28.677780   49120 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:28.677793   49120 kubeadm.go:322] 
	I0213 23:13:28.677864   49120 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:28.677881   49120 kubeadm.go:322] 
	I0213 23:13:28.677941   49120 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:28.677948   49120 kubeadm.go:322] 
	I0213 23:13:28.678019   49120 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:28.678125   49120 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:28.678215   49120 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:28.678223   49120 kubeadm.go:322] 
	I0213 23:13:28.678324   49120 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:28.678426   49120 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:28.678433   49120 kubeadm.go:322] 
	I0213 23:13:28.678544   49120 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.678685   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:28.678714   49120 kubeadm.go:322] 	--control-plane 
	I0213 23:13:28.678722   49120 kubeadm.go:322] 
	I0213 23:13:28.678834   49120 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:28.678841   49120 kubeadm.go:322] 
	I0213 23:13:28.678950   49120 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.679094   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:28.679106   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:13:28.679116   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:28.680826   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:25.241610   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.588591305s)
	I0213 23:13:25.241679   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:25.257221   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:25.271651   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:25.285556   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:25.285615   49443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:25.530438   49443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:29.281713   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:31.274625   49715 pod_ready.go:81] duration metric: took 4m0.00114055s waiting for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:31.274654   49715 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:31.274676   49715 pod_ready.go:38] duration metric: took 4m13.561333764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:31.274700   49715 kubeadm.go:640] restartCluster took 4m33.95094669s
	W0213 23:13:31.274766   49715 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:31.274807   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:28.682020   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:28.710027   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
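	The 1-k8s.conflist written above (457 bytes) is not shown in the log. A typical bridge CNI configuration of the kind this step installs looks like the following; the plugin chain is the standard bridge + portmap pair, but concrete values such as the pod subnet are placeholders and are not taken from this run:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": { "portMappings": true }
	        }
	      ]
	    }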
	I0213 23:13:28.752989   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:28.753118   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:28.753117   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=no-preload-778731 minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.147657   49120 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:29.147806   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.647920   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.648105   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.148819   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.648877   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.647939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.005257   49443 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:37.005340   49443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:37.005464   49443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:37.005611   49443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:37.005750   49443 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:37.005836   49443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:37.007501   49443 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:37.007606   49443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:37.007687   49443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:37.007782   49443 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:37.007869   49443 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:37.007960   49443 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:37.008047   49443 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:37.008139   49443 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:37.008221   49443 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:37.008324   49443 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:37.008437   49443 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:37.008488   49443 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:37.008577   49443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:37.008657   49443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:37.008742   49443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:37.008837   49443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:37.008916   49443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:37.009044   49443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:37.009150   49443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:37.010808   49443 out.go:204]   - Booting up control plane ...
	I0213 23:13:37.010943   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:37.011053   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:37.011155   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:37.011537   49443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:37.011661   49443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:37.011720   49443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:37.011915   49443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:37.012024   49443 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005842 seconds
	I0213 23:13:37.012154   49443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:37.012297   49443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:37.012376   49443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:37.012595   49443 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-340656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:37.012668   49443 kubeadm.go:322] [bootstrap-token] Using token: 0y2cx5.j4vucgv3wtut6xkw
	I0213 23:13:37.014296   49443 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:37.014433   49443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:37.014535   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:37.014697   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:37.014837   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:37.014966   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:37.015073   49443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:37.015203   49443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:37.015256   49443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:37.015316   49443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:37.015326   49443 kubeadm.go:322] 
	I0213 23:13:37.015393   49443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:37.015403   49443 kubeadm.go:322] 
	I0213 23:13:37.015500   49443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:37.015511   49443 kubeadm.go:322] 
	I0213 23:13:37.015535   49443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:37.015603   49443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:37.015668   49443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:37.015677   49443 kubeadm.go:322] 
	I0213 23:13:37.015744   49443 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:37.015754   49443 kubeadm.go:322] 
	I0213 23:13:37.015814   49443 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:37.015824   49443 kubeadm.go:322] 
	I0213 23:13:37.015889   49443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:37.015981   49443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:37.016075   49443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:37.016087   49443 kubeadm.go:322] 
	I0213 23:13:37.016182   49443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:37.016272   49443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:37.016282   49443 kubeadm.go:322] 
	I0213 23:13:37.016371   49443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016486   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:37.016522   49443 kubeadm.go:322] 	--control-plane 
	I0213 23:13:37.016527   49443 kubeadm.go:322] 
	I0213 23:13:37.016637   49443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:37.016643   49443 kubeadm.go:322] 
	I0213 23:13:37.016739   49443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016875   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:37.016887   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:13:37.016895   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:37.018483   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:33.148023   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:33.648861   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.147939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.648160   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.148620   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.648710   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.148263   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.648202   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.148597   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.648067   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.019795   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:37.080689   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:37.145132   49443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:37.145273   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.145374   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=embed-certs-340656 minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.195322   49443 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:37.575387   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.075523   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.575550   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.075996   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.148294   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.648747   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.148671   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.648021   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.148566   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.648799   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.148354   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.257502   49120 kubeadm.go:1088] duration metric: took 12.504501087s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:41.257549   49120 kubeadm.go:406] StartCluster complete in 5m13.347836612s
	I0213 23:13:41.257573   49120 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.257681   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:41.260299   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.260647   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:41.260677   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:41.260755   49120 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778731"
	I0213 23:13:41.260779   49120 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778731"
	W0213 23:13:41.260787   49120 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:41.260777   49120 addons.go:69] Setting metrics-server=true in profile "no-preload-778731"
	I0213 23:13:41.260807   49120 addons.go:234] Setting addon metrics-server=true in "no-preload-778731"
	W0213 23:13:41.260815   49120 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:41.260840   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260858   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260882   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:13:41.261207   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261227   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261267   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261291   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261426   49120 addons.go:69] Setting default-storageclass=true in profile "no-preload-778731"
	I0213 23:13:41.261447   49120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778731"
	I0213 23:13:41.261807   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261899   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.278449   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0213 23:13:41.278646   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0213 23:13:41.278874   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.278992   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.279367   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279389   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279460   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279485   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279748   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.279929   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.280301   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280345   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280389   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280403   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0213 23:13:41.280420   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280729   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.281302   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.281324   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.281723   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.281932   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.286017   49120 addons.go:234] Setting addon default-storageclass=true in "no-preload-778731"
	W0213 23:13:41.286039   49120 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:41.286067   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.286476   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.286511   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.299018   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0213 23:13:41.299266   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0213 23:13:41.299626   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.299951   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.300111   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300127   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300624   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300656   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300707   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.300885   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.301280   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.301628   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.303270   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.304846   49120 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:41.303809   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.306034   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:41.306048   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:41.306068   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.307731   49120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:41.309028   49120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.309045   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:41.309065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.309214   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0213 23:13:41.309635   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.309722   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310208   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.310227   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.310342   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.310379   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310514   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.310731   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.310877   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.310900   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.311093   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.311466   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.311516   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.312194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312559   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.312580   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312814   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.313006   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.313140   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.313283   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.327021   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0213 23:13:41.327605   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.328038   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.328055   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.328399   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.328596   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.330082   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.330333   49120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.330344   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:41.330356   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.333321   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333703   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.333731   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.334075   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.334494   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.334643   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.502879   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:41.534876   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:41.534908   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:41.587429   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.589619   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.616755   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:41.616783   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:41.688015   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.688039   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:41.777647   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.844418   49120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-778731" context rescaled to 1 replicas
	I0213 23:13:41.844460   49120 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:41.847252   49120 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:41.848614   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:42.311509   49120 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
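	The sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway. After the edit, the Corefile server block carries the injected log and hosts directives roughly as follows; the surrounding plugins reflect a stock kubeadm Corefile and are shown only for context:

	    .:53 {
	        log
	        errors
	        health
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	        }
	        hosts {                                  # block injected by the replace above
	           192.168.83.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	        loop
	        reload
	        loadbalance
	    }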
	I0213 23:13:42.915046   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327574246s)
	I0213 23:13:42.915112   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915127   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915219   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325575731s)
	I0213 23:13:42.915241   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915250   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915430   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.915467   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.915475   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.915485   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915493   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917607   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917640   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917673   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917652   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917719   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917730   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917764   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.917773   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917996   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.918014   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.963310   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.963336   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.963632   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.963652   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999467   49120 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.150816624s)
	I0213 23:13:42.999513   49120 node_ready.go:35] waiting up to 6m0s for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:42.999542   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221849263s)
	I0213 23:13:42.999604   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999620   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.999914   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.999932   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999944   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999953   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:43.000322   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:43.000341   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:43.000355   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:43.000372   49120 addons.go:470] Verifying addon metrics-server=true in "no-preload-778731"
	I0213 23:13:43.003022   49120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:39.575883   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.076191   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.575969   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.075959   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.576297   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.075511   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.575528   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.076112   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.575825   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:44.076340   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.156104   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.881268834s)
	I0213 23:13:46.156183   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:46.173816   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:46.185578   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:46.196865   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:46.196911   49715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:46.251785   49715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:46.251863   49715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:46.416331   49715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:46.416503   49715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:46.416643   49715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:46.690351   49715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:46.692352   49715 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:46.692470   49715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:46.692583   49715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:46.692710   49715 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:46.692812   49715 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:46.692929   49715 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:46.693027   49715 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:46.693116   49715 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:46.693220   49715 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:46.693322   49715 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:46.693423   49715 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:46.693480   49715 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:46.693559   49715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:46.919270   49715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:47.096236   49715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:47.207058   49715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:47.262083   49715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:47.262614   49715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:47.265288   49715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:47.267143   49715 out.go:204]   - Booting up control plane ...
	I0213 23:13:47.267277   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:47.267383   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:47.267570   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:47.284718   49715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:47.286027   49715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:47.286152   49715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:47.443974   49715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:43.004170   49120 addons.go:505] enable addons completed in 1.743494195s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:43.030538   49120 node_ready.go:49] node "no-preload-778731" has status "Ready":"True"
	I0213 23:13:43.030566   49120 node_ready.go:38] duration metric: took 31.039482ms waiting for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:43.030581   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:43.041854   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:43.085259   49120 pod_ready.go:97] pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085310   49120 pod_ready.go:81] duration metric: took 43.414984ms waiting for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:43.085328   49120 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085337   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094656   49120 pod_ready.go:92] pod "coredns-76f75df574-f4g5w" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.094686   49120 pod_ready.go:81] duration metric: took 2.009341273s waiting for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094696   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101331   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.101352   49120 pod_ready.go:81] duration metric: took 6.650644ms waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101362   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108662   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.108686   49120 pod_ready.go:81] duration metric: took 7.317621ms waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108695   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115600   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.115620   49120 pod_ready.go:81] duration metric: took 6.918739ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115629   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403942   49120 pod_ready.go:92] pod "kube-proxy-7vcqq" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.403977   49120 pod_ready.go:81] duration metric: took 288.33703ms waiting for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403990   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804609   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.804646   49120 pod_ready.go:81] duration metric: took 400.646621ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804661   49120 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:44.575423   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.076435   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.575498   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.076393   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.575716   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.075439   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.575623   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.076149   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.575619   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.757507   49443 kubeadm.go:1088] duration metric: took 11.612278698s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:48.757567   49443 kubeadm.go:406] StartCluster complete in 5m12.504615736s
	I0213 23:13:48.757592   49443 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.757689   49443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:48.760402   49443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.760794   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:48.761145   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:13:48.761320   49443 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:48.761392   49443 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-340656"
	I0213 23:13:48.761411   49443 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-340656"
	W0213 23:13:48.761420   49443 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:48.761470   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762064   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762094   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762173   49443 addons.go:69] Setting default-storageclass=true in profile "embed-certs-340656"
	I0213 23:13:48.762208   49443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-340656"
	I0213 23:13:48.762334   49443 addons.go:69] Setting metrics-server=true in profile "embed-certs-340656"
	I0213 23:13:48.762359   49443 addons.go:234] Setting addon metrics-server=true in "embed-certs-340656"
	W0213 23:13:48.762368   49443 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:48.762418   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762605   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762642   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762770   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762812   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.782845   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0213 23:13:48.782988   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0213 23:13:48.782993   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0213 23:13:48.783453   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783578   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783583   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.784018   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784038   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784160   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784177   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784197   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784211   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784431   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784636   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.784704   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784781   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.785241   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785264   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.785910   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785952   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.795703   49443 addons.go:234] Setting addon default-storageclass=true in "embed-certs-340656"
	W0213 23:13:48.795803   49443 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:48.795847   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.796295   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.796352   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.805562   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0213 23:13:48.806234   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.815444   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0213 23:13:48.815451   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.815558   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.817565   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.817770   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.818164   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.818796   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.818815   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.819308   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0213 23:13:48.819537   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.819661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.819723   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.821798   49443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:48.820119   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.821685   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.823106   49443 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:48.823122   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:48.823142   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.824803   49443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:48.826431   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.826467   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:48.826487   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:48.826507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.826393   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.826536   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.827054   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.827129   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.827155   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.827617   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.828067   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.828089   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.828119   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.828335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.828539   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.830417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.831572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.831604   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.832609   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.832827   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.832999   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.833165   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.851188   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0213 23:13:48.851868   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.852446   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.852482   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.852913   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.853134   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.855360   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.855766   49443 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:48.855792   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:48.855810   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.859610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.859877   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.859915   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.860263   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.860507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.860699   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.860854   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:49.015561   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:49.019336   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:49.047556   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:49.047593   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:49.083994   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:49.109749   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:49.109778   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:49.196430   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.196459   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:49.297603   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.306053   49443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-340656" context rescaled to 1 replicas
	I0213 23:13:49.306112   49443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:49.307559   49443 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:49.308883   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:51.125630   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109969214s)
	I0213 23:13:51.125663   49443 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0213 23:13:51.492579   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473198087s)
	I0213 23:13:51.492655   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492672   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492587   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.408541587s)
	I0213 23:13:51.492794   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492820   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493027   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493041   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493052   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493061   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493362   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493392   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493401   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493458   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493492   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493501   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493511   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493520   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493768   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493791   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.550911   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.550944   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.551267   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.551319   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.728993   49443 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420033663s)
	I0213 23:13:51.729078   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.431431547s)
	I0213 23:13:51.729114   49443 node_ready.go:35] waiting up to 6m0s for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.729135   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729163   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729446   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729462   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729473   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729483   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729770   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.729803   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729813   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729823   49443 addons.go:470] Verifying addon metrics-server=true in "embed-certs-340656"
	I0213 23:13:51.732785   49443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:47.812862   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:49.820823   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:52.318873   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:51.733634   49443 addons.go:505] enable addons completed in 2.972313278s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:51.741252   49443 node_ready.go:49] node "embed-certs-340656" has status "Ready":"True"
	I0213 23:13:51.741279   49443 node_ready.go:38] duration metric: took 12.133263ms waiting for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.741290   49443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:51.749409   49443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766298   49443 pod_ready.go:92] pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.766331   49443 pod_ready.go:81] duration metric: took 1.01688514s waiting for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766345   49443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777697   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.777725   49443 pod_ready.go:81] duration metric: took 11.371663ms waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777738   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789006   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.789030   49443 pod_ready.go:81] duration metric: took 11.286651ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789040   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798798   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.798820   49443 pod_ready.go:81] duration metric: took 9.773358ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798829   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807522   49443 pod_ready.go:92] pod "kube-proxy-4vgt5" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:53.807555   49443 pod_ready.go:81] duration metric: took 1.00871819s waiting for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807569   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133771   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:54.133808   49443 pod_ready.go:81] duration metric: took 326.228368ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133819   49443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:55.947176   49715 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502842 seconds
	I0213 23:13:55.947340   49715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:55.968064   49715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:56.503592   49715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:56.503798   49715 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-083863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:57.020246   49715 kubeadm.go:322] [bootstrap-token] Using token: 1sfxye.gyrkuj525fbtgg0g
	I0213 23:13:57.021591   49715 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:57.021724   49715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:57.028718   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:57.038574   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:57.046578   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:57.051622   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:57.065769   49715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:57.091404   49715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:57.330768   49715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:57.436406   49715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:57.436445   49715 kubeadm.go:322] 
	I0213 23:13:57.436542   49715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:57.436556   49715 kubeadm.go:322] 
	I0213 23:13:57.436650   49715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:57.436681   49715 kubeadm.go:322] 
	I0213 23:13:57.436729   49715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:57.436813   49715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:57.436887   49715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:57.436898   49715 kubeadm.go:322] 
	I0213 23:13:57.436989   49715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:57.437002   49715 kubeadm.go:322] 
	I0213 23:13:57.437067   49715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:57.437078   49715 kubeadm.go:322] 
	I0213 23:13:57.437137   49715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:57.437227   49715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:57.437344   49715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:57.437365   49715 kubeadm.go:322] 
	I0213 23:13:57.437463   49715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:57.437561   49715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:57.437577   49715 kubeadm.go:322] 
	I0213 23:13:57.437713   49715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.437878   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:57.437915   49715 kubeadm.go:322] 	--control-plane 
	I0213 23:13:57.437925   49715 kubeadm.go:322] 
	I0213 23:13:57.438021   49715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:57.438032   49715 kubeadm.go:322] 
	I0213 23:13:57.438140   49715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.438284   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:57.438602   49715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:57.438886   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:13:57.438904   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:57.440968   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:57.442459   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:57.466652   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:57.538217   49715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:57.538279   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:57.538289   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=default-k8s-diff-port-083863 minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:54.320129   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.812983   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.141892   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:58.143201   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:57.914767   49715 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:57.914957   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.415274   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.915866   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.415351   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.915329   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.415646   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.915129   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.415803   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.915716   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:02.415378   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.815013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:01.312236   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:00.645227   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:03.145517   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:02.915447   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.415367   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.915183   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.416047   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.915850   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.415867   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.915570   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.415580   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.915010   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:07.415431   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.314560   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.817591   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.642499   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.644055   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.916067   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.415001   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.915359   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.415672   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.915997   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:10.105267   49715 kubeadm.go:1088] duration metric: took 12.567044904s to wait for elevateKubeSystemPrivileges.
	I0213 23:14:10.105293   49715 kubeadm.go:406] StartCluster complete in 5m12.839656692s
	I0213 23:14:10.105310   49715 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.105392   49715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:14:10.107335   49715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.107629   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:14:10.107747   49715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:14:10.107821   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:14:10.107841   49715 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107858   49715 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107866   49715 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-083863"
	I0213 23:14:10.107873   49715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-083863"
	W0213 23:14:10.107878   49715 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:14:10.107885   49715 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107905   49715 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.107917   49715 addons.go:243] addon metrics-server should already be in state true
	I0213 23:14:10.107941   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.107961   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.108282   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108352   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108368   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108382   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108392   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108355   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.124618   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0213 23:14:10.124636   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0213 23:14:10.125154   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125261   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125984   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.125990   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.126014   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126029   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126422   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126501   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126604   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.127038   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.127067   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131142   49715 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.131168   49715 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:14:10.131196   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.131628   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.131661   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131866   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0213 23:14:10.132342   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.133024   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.133044   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.133539   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.134069   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.134119   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.145244   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0213 23:14:10.145674   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.146213   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.146233   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.146642   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.146845   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.148779   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.151227   49715 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:14:10.152983   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:14:10.153004   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:14:10.150602   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0213 23:14:10.153029   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.154229   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.154857   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.154876   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.155560   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.156429   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.156476   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.156757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.157450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157680   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.157898   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.158068   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.158211   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.159437   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0213 23:14:10.159780   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.160316   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.160328   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.160712   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.160874   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.163133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.166002   49715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:14:10.168221   49715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.168239   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:14:10.168259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.172119   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172539   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.172562   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172800   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.173447   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.173609   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.173769   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.175322   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0213 23:14:10.175719   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.176212   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.176223   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.176556   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.176727   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.178938   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.179149   49715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.179163   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:14:10.179174   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.182253   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.182739   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.182773   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.183106   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.183259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.183425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.183534   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.327834   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:14:10.327857   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:14:10.362507   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.405623   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:14:10.405655   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:14:10.413284   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.427964   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:14:10.459317   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.459343   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:14:10.552860   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.687588   49715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-083863" context rescaled to 1 replicas
	I0213 23:14:10.687640   49715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:14:10.689888   49715 out.go:177] * Verifying Kubernetes components...
	I0213 23:14:10.691656   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:14:08.312251   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:10.313161   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.313239   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.671905   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.309368382s)
	I0213 23:14:12.671963   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258642736s)
	I0213 23:14:12.671974   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.671999   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672008   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244007691s)
	I0213 23:14:12.672048   49715 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 23:14:12.672013   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672319   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672358   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672414   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672428   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672440   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672391   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672502   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672511   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672522   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672672   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672713   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672825   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672842   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672845   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.718598   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.718635   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.718899   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.718948   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.718957   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992151   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.439242656s)
	I0213 23:14:12.992169   49715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.300483548s)
	I0213 23:14:12.992204   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992208   49715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:12.992219   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.992608   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.992650   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.992674   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992694   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992706   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.993012   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.993033   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.993082   49715 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-083863"
	I0213 23:14:12.994959   49715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:14:10.144369   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.642284   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.996304   49715 addons.go:505] enable addons completed in 2.888556474s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:14:13.017331   49715 node_ready.go:49] node "default-k8s-diff-port-083863" has status "Ready":"True"
	I0213 23:14:13.017356   49715 node_ready.go:38] duration metric: took 25.135832ms waiting for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:13.017369   49715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:14:13.040090   49715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047064   49715 pod_ready.go:92] pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.047105   49715 pod_ready.go:81] duration metric: took 2.006967952s waiting for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047119   49715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052773   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.052793   49715 pod_ready.go:81] duration metric: took 5.668033ms waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052801   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.057989   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.058012   49715 pod_ready.go:81] duration metric: took 5.204253ms waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.058024   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063408   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.063426   49715 pod_ready.go:81] duration metric: took 5.394681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063434   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068502   49715 pod_ready.go:92] pod "kube-proxy-kvz2b" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.068523   49715 pod_ready.go:81] duration metric: took 5.082168ms waiting for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068534   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445109   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.445132   49715 pod_ready.go:81] duration metric: took 376.590631ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445142   49715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:17.453588   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:14.816746   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.313290   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:15.141901   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.641098   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.453805   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.954116   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.812763   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.814338   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.641389   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.641735   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.142168   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.455003   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.952168   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.312468   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.813420   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.641722   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.141082   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:28.954054   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:30.954647   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.311343   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.312249   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.143011   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.642102   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.452218   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.453522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.457001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.314313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.812309   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:36.143532   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:38.640894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:39.955206   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.456339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.813776   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.314111   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.642572   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:43.141919   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:44.955150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.454324   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.813470   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.313382   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.143485   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.641760   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.954167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.453822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.814576   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:50.312600   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.313062   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.642698   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.141500   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.141646   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.454979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.953279   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.812403   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.813413   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.142104   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:58.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.453692   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.952522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.313705   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.813002   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:00.642441   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:02.644754   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.954032   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.453202   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.813780   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.312152   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:04.645545   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:07.142188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.454411   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:10.953929   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.813133   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.315282   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:09.641331   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.644066   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:14.141197   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.452937   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:15.453227   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:17.455142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.814488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.312013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.142256   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:19.956449   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.454447   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.313100   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.315124   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.642516   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:23.141725   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.955277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:26.956469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.813277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.813332   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.313503   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:25.148206   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.642527   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.453659   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:31.953193   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.812921   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.311859   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.642812   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.141177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.141385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.452179   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.454250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.312263   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.812360   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.642681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.142639   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:38.952639   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:40.953841   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.311603   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.312975   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.640004   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.641689   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:42.954046   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.453175   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.812207   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:46.313761   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.642354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.141466   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:47.953013   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.455958   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.813689   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:51.312695   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.144359   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.145852   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.952203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.960421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.455215   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:53.312858   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:55.313197   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.313493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.642775   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.142159   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.143780   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.953718   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.954907   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.813086   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:02.313743   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.640609   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:03.641712   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.453269   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:06.454001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.813366   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.313460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:05.642520   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.644309   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:08.454568   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.953538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:09.315454   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:11.814145   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.142385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.644175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.953619   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.452015   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.455884   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:14.311599   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:16.312822   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.143506   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.643647   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:19.952742   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:21.953464   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:18.314298   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.812863   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.142175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:22.641953   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.953599   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.953715   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.312368   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.813170   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:24.642939   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:27.143008   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.452587   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.454360   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.314038   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.812058   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:29.642029   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.141959   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.142628   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.955547   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:35.453428   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.456558   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.813040   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.813607   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.314673   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:36.143091   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:38.147685   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.953073   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:42.452724   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.811843   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:41.811877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:40.645177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.140828   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:44.453277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.453393   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.813703   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.312231   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:45.141859   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:47.142843   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.453508   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.456357   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.312293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.812918   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:49.641676   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.142518   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.951784   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.954108   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.455497   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:53.312477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:55.313195   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.642918   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.141241   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.141855   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.954832   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.455675   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.811554   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.813709   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.313752   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:01.142778   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:03.143196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.953816   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.953967   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.812917   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.814681   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:05.644404   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:07.644824   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.455392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.953935   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.312828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.811876   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:10.141985   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:12.642984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.453572   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.454161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.314828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.813786   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:15.143013   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:17.143864   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.144089   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:18.952608   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:20.952810   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.312837   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.316700   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.641354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:24.142975   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:22.953607   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.453091   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.454501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:23.811674   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.814225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:26.640796   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:28.642684   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:29.952519   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.453137   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.816563   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.314052   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.642932   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:33.142380   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.456778   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.459583   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.812724   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.812895   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.813814   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:35.641888   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.144690   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.952822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.956268   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.821433   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:41.313306   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.641240   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:42.641667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.453378   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.953398   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.313457   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812519   49120 pod_ready.go:81] duration metric: took 4m0.007851911s waiting for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:45.812528   49120 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:45.812534   49120 pod_ready.go:38] duration metric: took 4m2.781943239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:45.812548   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:45.812574   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:45.812640   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:45.881239   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:45.881267   49120 cri.go:89] found id: ""
	I0213 23:17:45.881277   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:45.881327   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.886446   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:45.886531   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:45.926920   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:45.926947   49120 cri.go:89] found id: ""
	I0213 23:17:45.926955   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:45.927000   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.931500   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:45.931577   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:45.979081   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:45.979109   49120 cri.go:89] found id: ""
	I0213 23:17:45.979119   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:45.979174   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.984481   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:45.984539   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:46.035365   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.035385   49120 cri.go:89] found id: ""
	I0213 23:17:46.035392   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:46.035438   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.039634   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:46.039695   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:46.087404   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:46.087429   49120 cri.go:89] found id: ""
	I0213 23:17:46.087436   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:46.087490   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.091828   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:46.091889   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:46.133625   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:46.133651   49120 cri.go:89] found id: ""
	I0213 23:17:46.133658   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:46.133710   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.138378   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:46.138456   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:46.181018   49120 cri.go:89] found id: ""
	I0213 23:17:46.181048   49120 logs.go:276] 0 containers: []
	W0213 23:17:46.181058   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:46.181065   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:46.181141   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:46.221347   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.221374   49120 cri.go:89] found id: ""
	I0213 23:17:46.221385   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:46.221448   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.226298   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:46.226331   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:46.268881   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:46.268915   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.325183   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:46.325225   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.372600   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:46.372637   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:46.791381   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:46.791438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:46.861239   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:46.861431   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:46.884969   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:46.885009   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:46.909324   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:46.909352   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:46.966664   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:46.966698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:47.030276   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:47.030321   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:47.081480   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:47.081516   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:47.238201   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:47.238238   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:47.285995   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:47.286033   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:47.332459   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332486   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:47.332566   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:47.332580   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:47.332596   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:47.332616   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332622   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:44.643384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.141032   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.953650   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:50.453421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.453501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:49.641373   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.142827   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:54.141398   49443 pod_ready.go:81] duration metric: took 4m0.007567399s waiting for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:54.141420   49443 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:54.141428   49443 pod_ready.go:38] duration metric: took 4m2.400127673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:54.141441   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:54.141464   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:54.141506   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:54.203295   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:54.203319   49443 cri.go:89] found id: ""
	I0213 23:17:54.203329   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:54.203387   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.208671   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:54.208748   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:54.254150   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:54.254183   49443 cri.go:89] found id: ""
	I0213 23:17:54.254193   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:54.254259   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.259090   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:54.259178   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:54.309365   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:54.309385   49443 cri.go:89] found id: ""
	I0213 23:17:54.309392   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:54.309436   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.315937   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:54.316014   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:54.363796   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.363855   49443 cri.go:89] found id: ""
	I0213 23:17:54.363866   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:54.363926   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.368767   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:54.368842   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:54.417590   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:54.417620   49443 cri.go:89] found id: ""
	I0213 23:17:54.417637   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:54.417696   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.422980   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:54.423053   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:54.468990   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.469019   49443 cri.go:89] found id: ""
	I0213 23:17:54.469029   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:54.469094   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.473989   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:54.474073   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:54.524124   49443 cri.go:89] found id: ""
	I0213 23:17:54.524154   49443 logs.go:276] 0 containers: []
	W0213 23:17:54.524164   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:54.524172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:54.524239   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.953845   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.459517   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.333824   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:57.351216   49120 api_server.go:72] duration metric: took 4m15.50672707s to wait for apiserver process to appear ...
	I0213 23:17:57.351245   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:57.351281   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:57.351340   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:57.405928   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:57.405956   49120 cri.go:89] found id: ""
	I0213 23:17:57.405963   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:57.406007   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.410541   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:57.410619   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:57.456843   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:57.456871   49120 cri.go:89] found id: ""
	I0213 23:17:57.456881   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:57.456940   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.461801   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:57.461852   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:57.504653   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.504690   49120 cri.go:89] found id: ""
	I0213 23:17:57.504702   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:57.504762   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.509177   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:57.509250   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:57.556672   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:57.556696   49120 cri.go:89] found id: ""
	I0213 23:17:57.556704   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:57.556747   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.561343   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:57.561399   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:57.606959   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:57.606994   49120 cri.go:89] found id: ""
	I0213 23:17:57.607005   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:57.607068   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.611356   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:57.611440   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:57.655205   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:57.655230   49120 cri.go:89] found id: ""
	I0213 23:17:57.655238   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:57.655284   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.659762   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:57.659850   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:57.699989   49120 cri.go:89] found id: ""
	I0213 23:17:57.700012   49120 logs.go:276] 0 containers: []
	W0213 23:17:57.700019   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:57.700028   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:57.700075   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.562654   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.562674   49443 cri.go:89] found id: ""
	I0213 23:17:54.562682   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:54.562745   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.567182   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:54.567209   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:54.666809   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:54.666847   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:54.818292   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:54.818324   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.878074   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:54.878108   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.938472   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:54.938509   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.985201   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:54.985235   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:54.999987   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:55.000016   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:55.058536   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:55.058573   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:55.108130   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:55.108172   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:55.154299   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:55.154327   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:55.205554   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:55.205583   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:55.615944   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:55.615987   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.179069   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:58.194968   49443 api_server.go:72] duration metric: took 4m8.888826635s to wait for apiserver process to appear ...
	I0213 23:17:58.194992   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:58.195020   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:58.195067   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:58.245997   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.246029   49443 cri.go:89] found id: ""
	I0213 23:17:58.246038   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:58.246103   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.251486   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:58.251566   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:58.299878   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:58.299909   49443 cri.go:89] found id: ""
	I0213 23:17:58.299919   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:58.299977   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.305075   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:58.305139   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:58.352587   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:58.352617   49443 cri.go:89] found id: ""
	I0213 23:17:58.352628   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:58.352688   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.357493   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:58.357576   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:58.412181   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.412203   49443 cri.go:89] found id: ""
	I0213 23:17:58.412211   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:58.412265   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.418852   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:58.418931   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:58.470881   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.470907   49443 cri.go:89] found id: ""
	I0213 23:17:58.470916   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:58.470970   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.476768   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:58.476851   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:58.548272   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:58.548293   49443 cri.go:89] found id: ""
	I0213 23:17:58.548301   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:58.548357   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.553380   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:58.553452   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:58.599623   49443 cri.go:89] found id: ""
	I0213 23:17:58.599652   49443 logs.go:276] 0 containers: []
	W0213 23:17:58.599663   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:58.599669   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:58.599725   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:58.647872   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.647896   49443 cri.go:89] found id: ""
	I0213 23:17:58.647906   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:58.647966   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.653015   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:58.653041   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.707958   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:58.708000   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.759975   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:58.760015   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.814801   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:58.814833   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.853782   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.853814   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:59.217806   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:59.217854   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:59.278255   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:59.278294   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:59.385496   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:59.385537   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:59.953729   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:02.454016   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.740739   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:57.740774   49120 cri.go:89] found id: ""
	I0213 23:17:57.740785   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:57.740839   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.745140   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:57.745163   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:57.758556   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:57.758604   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:57.900468   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:57.900507   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.945665   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:57.945693   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:58.003484   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:58.003521   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:58.048797   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:58.048826   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.096309   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:58.096347   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:58.173795   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.173990   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.196277   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:58.196306   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:58.266087   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:58.266129   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:58.325638   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:58.325676   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:58.372711   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:58.372752   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:58.444057   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.444097   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:58.830470   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830511   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:58.830572   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:58.830591   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.830600   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.830610   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830618   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:59.544056   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:59.544517   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:59.607033   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:59.607067   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:59.654534   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:59.654584   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:59.719274   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:59.719309   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:02.234489   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:18:02.240412   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:18:02.241675   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:02.241699   49443 api_server.go:131] duration metric: took 4.046700263s to wait for apiserver health ...
	I0213 23:18:02.241710   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:02.241735   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:02.241796   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:02.289133   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:02.289158   49443 cri.go:89] found id: ""
	I0213 23:18:02.289166   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:18:02.289212   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.295450   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:02.295527   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:02.342262   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:02.342285   49443 cri.go:89] found id: ""
	I0213 23:18:02.342292   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:18:02.342337   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.346810   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:02.346874   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:02.385638   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:02.385665   49443 cri.go:89] found id: ""
	I0213 23:18:02.385673   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:18:02.385725   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.389834   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:02.389920   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:02.435078   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:02.435110   49443 cri.go:89] found id: ""
	I0213 23:18:02.435121   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:18:02.435184   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.440237   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:02.440297   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:02.483869   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.483891   49443 cri.go:89] found id: ""
	I0213 23:18:02.483899   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:18:02.483942   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.490454   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:02.490532   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:02.540971   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:02.541000   49443 cri.go:89] found id: ""
	I0213 23:18:02.541010   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:18:02.541069   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.545818   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:02.545906   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:02.593132   49443 cri.go:89] found id: ""
	I0213 23:18:02.593159   49443 logs.go:276] 0 containers: []
	W0213 23:18:02.593166   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:02.593172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:02.593222   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:02.634979   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.635015   49443 cri.go:89] found id: ""
	I0213 23:18:02.635028   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:18:02.635089   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.640246   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:18:02.640274   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.681426   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:18:02.681458   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.721033   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:02.721062   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:03.049340   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:03.049385   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:18:03.154378   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:18:03.154417   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:03.215045   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:18:03.215081   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:03.260291   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:18:03.260320   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:03.323526   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:18:03.323565   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:03.378686   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:03.378731   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:03.406717   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:03.406742   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:03.547999   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:18:03.548035   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:03.593226   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:18:03.593255   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:06.160914   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:06.160954   49443 system_pods.go:61] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.160963   49443 system_pods.go:61] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.160970   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.160977   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.160996   49443 system_pods.go:61] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.161008   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.161018   49443 system_pods.go:61] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.161025   49443 system_pods.go:61] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.161035   49443 system_pods.go:74] duration metric: took 3.919318115s to wait for pod list to return data ...
	I0213 23:18:06.161046   49443 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:06.165231   49443 default_sa.go:45] found service account: "default"
	I0213 23:18:06.165262   49443 default_sa.go:55] duration metric: took 4.207834ms for default service account to be created ...
	I0213 23:18:06.165271   49443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:06.172453   49443 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:06.172488   49443 system_pods.go:89] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.172494   49443 system_pods.go:89] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.172499   49443 system_pods.go:89] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.172503   49443 system_pods.go:89] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.172507   49443 system_pods.go:89] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.172512   49443 system_pods.go:89] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.172517   49443 system_pods.go:89] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.172522   49443 system_pods.go:89] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.172531   49443 system_pods.go:126] duration metric: took 7.254871ms to wait for k8s-apps to be running ...
	I0213 23:18:06.172541   49443 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:06.172598   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:06.193026   49443 system_svc.go:56] duration metric: took 20.479072ms WaitForService to wait for kubelet.
	I0213 23:18:06.193051   49443 kubeadm.go:581] duration metric: took 4m16.886913912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:06.193072   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:06.196910   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:06.196940   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:06.196951   49443 node_conditions.go:105] duration metric: took 3.874223ms to run NodePressure ...
	I0213 23:18:06.196962   49443 start.go:228] waiting for startup goroutines ...
	I0213 23:18:06.196968   49443 start.go:233] waiting for cluster config update ...
	I0213 23:18:06.196977   49443 start.go:242] writing updated cluster config ...
	I0213 23:18:06.197233   49443 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:06.248295   49443 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:06.250392   49443 out.go:177] * Done! kubectl is now configured to use "embed-certs-340656" cluster and "default" namespace by default
	I0213 23:18:04.455358   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:06.953191   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.954115   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:10.954853   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.832437   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:18:08.838687   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:18:08.839999   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:18:08.840021   49120 api_server.go:131] duration metric: took 11.488768389s to wait for apiserver health ...
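The health wait above polls the apiserver's /healthz endpoint directly over HTTPS. A minimal sketch of making the same check by hand through kubectl, assuming the kubectl context carries the profile name as minikube normally configures it:

    # Ask the apiserver for its aggregate health; prints "ok" when healthy
    kubectl --context no-preload-778731 get --raw /healthz
    # List the individual health checks as well
    kubectl --context no-preload-778731 get --raw '/healthz?verbose'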
	I0213 23:18:08.840031   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:08.840058   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:08.840122   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:08.891532   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:08.891559   49120 cri.go:89] found id: ""
	I0213 23:18:08.891567   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:18:08.891618   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.896712   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:08.896802   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:08.943555   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:08.943584   49120 cri.go:89] found id: ""
	I0213 23:18:08.943593   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:18:08.943654   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.948658   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:08.948730   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:08.995867   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:08.995896   49120 cri.go:89] found id: ""
	I0213 23:18:08.995905   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:18:08.995970   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.000810   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:09.000883   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:09.046606   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.046636   49120 cri.go:89] found id: ""
	I0213 23:18:09.046646   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:18:09.046706   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.050924   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:09.050986   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:09.097414   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.097445   49120 cri.go:89] found id: ""
	I0213 23:18:09.097456   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:18:09.097525   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.102101   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:09.102177   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:09.164244   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.164267   49120 cri.go:89] found id: ""
	I0213 23:18:09.164274   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:18:09.164323   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.169164   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:09.169238   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:09.217068   49120 cri.go:89] found id: ""
	I0213 23:18:09.217094   49120 logs.go:276] 0 containers: []
	W0213 23:18:09.217101   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:09.217106   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:09.217174   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:09.256986   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.257017   49120 cri.go:89] found id: ""
	I0213 23:18:09.257028   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:18:09.257088   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.261602   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:18:09.261625   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.314910   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:18:09.314957   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.361576   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:18:09.361609   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.433243   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:18:09.433281   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.485648   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:09.485698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:09.634091   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:18:09.634127   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:09.681649   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:18:09.681689   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:09.729410   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:09.729449   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:10.100058   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:18:10.100104   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:10.156178   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:10.156209   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:10.229188   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.229358   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.251947   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:10.251987   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:10.268224   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:18:10.268251   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:10.319580   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319608   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:10.319651   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:18:10.319663   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.319673   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.319685   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319696   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
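The block above is one complete diagnostics pass: for each control-plane component the harness resolves the container ID with crictl, tails that container's log, and then pulls the kubelet and CRI-O journals, flagging anything that looks like a kubelet problem. A minimal sketch of repeating one of those steps by hand on the node, using the same commands the harness logs (SSH access via the minikube profile is assumed):

    # Open a shell on the no-preload node
    minikube ssh -p no-preload-778731
    #   (inside the node shell)
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo crictl logs --tail 400 "$ID"
    # Node-level journals the harness also collects
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400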
	I0213 23:18:13.453597   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:15.445609   49715 pod_ready.go:81] duration metric: took 4m0.000451749s waiting for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	E0213 23:18:15.445643   49715 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:18:15.445653   49715 pod_ready.go:38] duration metric: took 4m2.428270702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
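The deadline above fires because metrics-server-57f55c9bc5-rkg49 stays Pending with its container not ready for the whole extra wait, and the later pod listings still report it that way. A minimal sketch of how one might inspect the stuck pod from outside the test, assuming the kubectl context name matches the minikube profile:

    # Show scheduling, image-pull, and probe events for the stuck metrics-server pod
    kubectl --context default-k8s-diff-port-083863 -n kube-system describe pod metrics-server-57f55c9bc5-rkg49
    # Recent kube-system events, newest last
    kubectl --context default-k8s-diff-port-083863 -n kube-system get events --sort-by=.lastTimestamp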
	I0213 23:18:15.445670   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:18:15.445716   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:15.445773   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:15.501757   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:15.501791   49715 cri.go:89] found id: ""
	I0213 23:18:15.501802   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:15.501863   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.507658   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:15.507738   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:15.552164   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:15.552197   49715 cri.go:89] found id: ""
	I0213 23:18:15.552204   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:15.552257   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.557704   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:15.557764   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:15.606147   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:15.606168   49715 cri.go:89] found id: ""
	I0213 23:18:15.606175   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:15.606231   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.610863   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:15.610939   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:15.655298   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:15.655320   49715 cri.go:89] found id: ""
	I0213 23:18:15.655329   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:15.655387   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.660000   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:15.660062   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:15.699700   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:15.699735   49715 cri.go:89] found id: ""
	I0213 23:18:15.699745   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:15.699815   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.704535   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:15.704614   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:15.746999   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:15.747028   49715 cri.go:89] found id: ""
	I0213 23:18:15.747038   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:15.747091   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.752065   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:15.752137   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:15.793372   49715 cri.go:89] found id: ""
	I0213 23:18:15.793404   49715 logs.go:276] 0 containers: []
	W0213 23:18:15.793415   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:15.793422   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:15.793487   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:15.839630   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:15.839660   49715 cri.go:89] found id: ""
	I0213 23:18:15.839668   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:15.839723   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.844199   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:15.844225   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:15.904450   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:15.904479   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:15.925777   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:15.925805   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:16.079602   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:16.079634   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:16.121369   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:16.121400   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:16.174404   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:16.174440   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:16.216286   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:16.216321   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:16.629527   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:16.629564   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:16.708003   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.708235   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.729748   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:16.729784   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:16.784398   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:16.784432   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:16.829885   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:16.829923   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:16.872036   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:16.872066   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:16.937327   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937359   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:16.937411   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:16.937421   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.937431   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.937441   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937449   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:20.329462   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:20.329500   49120 system_pods.go:61] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.329508   49120 system_pods.go:61] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.329515   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.329521   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.329527   49120 system_pods.go:61] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.329533   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.329543   49120 system_pods.go:61] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.329550   49120 system_pods.go:61] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.329560   49120 system_pods.go:74] duration metric: took 11.489522059s to wait for pod list to return data ...
	I0213 23:18:20.329569   49120 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:20.332784   49120 default_sa.go:45] found service account: "default"
	I0213 23:18:20.332809   49120 default_sa.go:55] duration metric: took 3.233136ms for default service account to be created ...
	I0213 23:18:20.332817   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:20.339002   49120 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:20.339033   49120 system_pods.go:89] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.339042   49120 system_pods.go:89] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.339049   49120 system_pods.go:89] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.339056   49120 system_pods.go:89] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.339063   49120 system_pods.go:89] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.339070   49120 system_pods.go:89] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.339084   49120 system_pods.go:89] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.339093   49120 system_pods.go:89] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.339116   49120 system_pods.go:126] duration metric: took 6.292649ms to wait for k8s-apps to be running ...
	I0213 23:18:20.339125   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:20.339183   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:20.354459   49120 system_svc.go:56] duration metric: took 15.325743ms WaitForService to wait for kubelet.
	I0213 23:18:20.354488   49120 kubeadm.go:581] duration metric: took 4m38.510005999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:20.354505   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:20.358160   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:20.358186   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:20.358195   49120 node_conditions.go:105] duration metric: took 3.685402ms to run NodePressure ...
	I0213 23:18:20.358205   49120 start.go:228] waiting for startup goroutines ...
	I0213 23:18:20.358211   49120 start.go:233] waiting for cluster config update ...
	I0213 23:18:20.358220   49120 start.go:242] writing updated cluster config ...
	I0213 23:18:20.358527   49120 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:20.409811   49120 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 23:18:20.412251   49120 out.go:177] * Done! kubectl is now configured to use "no-preload-778731" cluster and "default" namespace by default
	I0213 23:18:26.939087   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:18:26.956231   49715 api_server.go:72] duration metric: took 4m16.268553955s to wait for apiserver process to appear ...
	I0213 23:18:26.956259   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:18:26.956317   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:26.956382   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:27.006428   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.006455   49715 cri.go:89] found id: ""
	I0213 23:18:27.006465   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:27.006527   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.011468   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:27.011542   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:27.054309   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.054334   49715 cri.go:89] found id: ""
	I0213 23:18:27.054344   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:27.054393   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.058925   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:27.058979   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:27.101942   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.101971   49715 cri.go:89] found id: ""
	I0213 23:18:27.101981   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:27.102041   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.107540   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:27.107609   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:27.152126   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.152150   49715 cri.go:89] found id: ""
	I0213 23:18:27.152157   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:27.152203   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.156537   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:27.156608   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:27.202931   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:27.202952   49715 cri.go:89] found id: ""
	I0213 23:18:27.202959   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:27.203006   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.209339   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:27.209405   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:27.250771   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:27.250814   49715 cri.go:89] found id: ""
	I0213 23:18:27.250828   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:27.250898   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.255547   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:27.255621   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:27.297645   49715 cri.go:89] found id: ""
	I0213 23:18:27.297679   49715 logs.go:276] 0 containers: []
	W0213 23:18:27.297689   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:27.297697   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:27.297765   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:27.340690   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.340719   49715 cri.go:89] found id: ""
	I0213 23:18:27.340728   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:27.340786   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.345308   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:27.345338   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:27.481620   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:27.481653   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.541421   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:27.541456   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.594527   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:27.594559   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.657323   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:27.657358   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.710198   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:27.710234   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.750419   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:27.750451   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:28.148333   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:28.148374   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:28.162927   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:28.162959   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:28.214802   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:28.214835   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:28.264035   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:28.264061   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:28.328849   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:28.328888   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:28.408683   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.408859   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429691   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429721   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:28.429772   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:28.429780   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.429787   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429793   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429798   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:38.431065   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:18:38.438496   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:18:38.440109   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:38.440131   49715 api_server.go:131] duration metric: took 11.483865303s to wait for apiserver health ...
	I0213 23:18:38.440139   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:38.440163   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:38.440218   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:38.485767   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:38.485791   49715 cri.go:89] found id: ""
	I0213 23:18:38.485798   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:38.485847   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.490804   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:38.490876   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:38.540174   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:38.540196   49715 cri.go:89] found id: ""
	I0213 23:18:38.540203   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:38.540247   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.545816   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:38.545904   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:38.593443   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:38.593466   49715 cri.go:89] found id: ""
	I0213 23:18:38.593474   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:38.593531   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.598567   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:38.598642   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:38.646508   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:38.646539   49715 cri.go:89] found id: ""
	I0213 23:18:38.646549   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:38.646605   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.651425   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:38.651489   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:38.695133   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:38.695157   49715 cri.go:89] found id: ""
	I0213 23:18:38.695166   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:38.695226   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.700446   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:38.700504   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:38.748214   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.748251   49715 cri.go:89] found id: ""
	I0213 23:18:38.748261   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:38.748319   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.753466   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:38.753532   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:38.796480   49715 cri.go:89] found id: ""
	I0213 23:18:38.796505   49715 logs.go:276] 0 containers: []
	W0213 23:18:38.796514   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:38.796521   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:38.796597   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:38.838145   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.838189   49715 cri.go:89] found id: ""
	I0213 23:18:38.838199   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:38.838259   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.844252   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:38.844279   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.919402   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:38.919442   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.963733   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:38.963767   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:39.013301   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:39.013336   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:39.142161   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:39.142192   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:39.199423   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:39.199455   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:39.245639   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:39.245669   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:39.290916   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:39.290954   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:39.343373   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:39.343405   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:39.700393   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:39.700441   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:39.777386   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.777564   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.800035   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:39.800087   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:39.817941   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:39.817972   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:39.870635   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870675   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:39.870733   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:39.870744   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.870749   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.870756   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870764   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
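The two kubelet lines flagged above are node-authorizer denials: the kubelet on default-k8s-diff-port-083863 tried to list and watch the coredns ConfigMap before any pod on that node referenced it, so the node authorizer answered "no relationship found" and refused the request. This is typically transient after a restart, and the pod listings that follow show coredns Running. A minimal sketch of confirming DNS recovered, again assuming the kubectl context matches the profile name:

    # coredns pods carry the standard k8s-app=kube-dns label
    kubectl --context default-k8s-diff-port-083863 -n kube-system get pods -l k8s-app=kube-dns -o wide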
	I0213 23:18:49.878184   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:49.878220   49715 system_pods.go:61] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.878229   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.878237   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.878244   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.878250   49715 system_pods.go:61] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.878256   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.878268   49715 system_pods.go:61] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.878276   49715 system_pods.go:61] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.878284   49715 system_pods.go:74] duration metric: took 11.438139039s to wait for pod list to return data ...
	I0213 23:18:49.878294   49715 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:49.881702   49715 default_sa.go:45] found service account: "default"
	I0213 23:18:49.881730   49715 default_sa.go:55] duration metric: took 3.42943ms for default service account to be created ...
	I0213 23:18:49.881741   49715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:49.888356   49715 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:49.888380   49715 system_pods.go:89] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.888385   49715 system_pods.go:89] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.888392   49715 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.888397   49715 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.888403   49715 system_pods.go:89] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.888409   49715 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.888422   49715 system_pods.go:89] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.888434   49715 system_pods.go:89] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.888446   49715 system_pods.go:126] duration metric: took 6.698139ms to wait for k8s-apps to be running ...
	I0213 23:18:49.888456   49715 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:49.888497   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:49.905396   49715 system_svc.go:56] duration metric: took 16.928016ms WaitForService to wait for kubelet.
	I0213 23:18:49.905427   49715 kubeadm.go:581] duration metric: took 4m39.217754888s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:49.905452   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:49.909261   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:49.909296   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:49.909312   49715 node_conditions.go:105] duration metric: took 3.854435ms to run NodePressure ...
	I0213 23:18:49.909326   49715 start.go:228] waiting for startup goroutines ...
	I0213 23:18:49.909334   49715 start.go:233] waiting for cluster config update ...
	I0213 23:18:49.909347   49715 start.go:242] writing updated cluster config ...
	I0213 23:18:49.909654   49715 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:49.961022   49715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:49.963033   49715 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-083863" cluster and "default" namespace by default
	
	
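What follows appears to be the post-failure diagnostics dump for the default-k8s-diff-port-083863 VM, starting with the CRI-O journal over the window shown in the journal header. A minimal sketch of collecting the same journal by hand (SSH access via the minikube profile is assumed):

    # Dump the CRI-O unit journal from the node without paging
    minikube ssh -p default-k8s-diff-port-083863 -- sudo journalctl -u crio --no-pager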
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:41 UTC, ends at Tue 2024-02-13 23:27:51 UTC. --
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.770110607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866871770084470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=05587fbd-cace-4377-bed7-36515c7c9024 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.770830385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5ddfba60-f35a-4e7e-a9ce-5f46613f1bc2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.770912432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5ddfba60-f35a-4e7e-a9ce-5f46613f1bc2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.771104264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5ddfba60-f35a-4e7e-a9ce-5f46613f1bc2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.814802428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6f4f9d13-89ce-44af-b838-06b5d85b07e4 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.814934129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6f4f9d13-89ce-44af-b838-06b5d85b07e4 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.817070857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1bf9fed6-2d37-4bc9-9fe2-0f84877bd450 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.817492409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866871817476561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1bf9fed6-2d37-4bc9-9fe2-0f84877bd450 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.818048664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43f98031-1d09-43fa-bf69-a9bd08e22199 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.818123501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43f98031-1d09-43fa-bf69-a9bd08e22199 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.818304673Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43f98031-1d09-43fa-bf69-a9bd08e22199 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.863305564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ccfea4f8-3679-4e57-aa36-3f6f475955ea name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.863359356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ccfea4f8-3679-4e57-aa36-3f6f475955ea name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.864888233Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=57e271e4-3087-4fd9-84eb-8f7dc204204d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.865340210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866871865324727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=57e271e4-3087-4fd9-84eb-8f7dc204204d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.867152262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82c529a6-7587-4c2f-bc70-71df8ee002bb name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.867228975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82c529a6-7587-4c2f-bc70-71df8ee002bb name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.867413763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82c529a6-7587-4c2f-bc70-71df8ee002bb name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.916552568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=48b74d07-be9e-418b-8f07-9c75882ac7fc name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.916651608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=48b74d07-be9e-418b-8f07-9c75882ac7fc name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.917920867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3377b988-e659-4f95-83be-3aad2082a244 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.918431356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866871918416277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3377b988-e659-4f95-83be-3aad2082a244 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.919289229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=882fb006-4843-4e84-8128-33f1ea04d728 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.919369283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=882fb006-4843-4e84-8128-33f1ea04d728 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:51 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:27:51.919612955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=882fb006-4843-4e84-8128-33f1ea04d728 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b77bb1054c124       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   b22d5433d1db1       storage-provisioner
	54c4e3487b37a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   b4980c423eefc       coredns-5dd5756b68-zfscd
	cf87943bc8d36       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   2cfaff2ab3966       kube-proxy-kvz2b
	d21b5c6916454       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   1301d0fa68e32       etcd-default-k8s-diff-port-083863
	090e6a31f6e25       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   16023d4c404e4       kube-controller-manager-default-k8s-diff-port-083863
	5b9dcc8f5592c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   972046b89c6b4       kube-scheduler-default-k8s-diff-port-083863
	fab70becf45b1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   dcde9a0e0d2f5       kube-apiserver-default-k8s-diff-port-083863
	
	
	==> coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36814 - 44383 "HINFO IN 6798519642253464597.6305308384375373136. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.10547697s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-083863
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-083863
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=default-k8s-diff-port-083863
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-083863
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:27:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:24:29 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:24:29 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:24:29 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:24:29 +0000   Tue, 13 Feb 2024 23:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    default-k8s-diff-port-083863
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c35be514b0749f1bb646e6d331bddbd
	  System UUID:                2c35be51-4b07-49f1-bb64-6e6d331bddbd
	  Boot ID:                    6517d7fc-ffdb-4ab9-a6ee-ce0bf8e78a15
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zfscd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-083863                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-083863             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-083863    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-kvz2b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-083863             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-rkg49                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeReady                13m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-083863 event: Registered Node default-k8s-diff-port-083863 in Controller
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069635] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.579001] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.542564] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145047] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.504965] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.355292] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.142249] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.257558] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.132249] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.303438] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Feb13 23:09] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[ +19.082845] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:13] systemd-fstab-generator[3487]: Ignoring "noauto" for root device
	[  +9.785148] systemd-fstab-generator[3810]: Ignoring "noauto" for root device
	[Feb13 23:14] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] <==
	{"level":"info","ts":"2024-02-13T23:13:51.410878Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ac0ce77fb984259c","initial-advertise-peer-urls":["https://192.168.39.3:2380"],"listen-peer-urls":["https://192.168.39.3:2380"],"advertise-client-urls":["https://192.168.39.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-13T23:13:51.410939Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2024-02-13T23:13:51.411052Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.3:2380"}
	{"level":"info","ts":"2024-02-13T23:13:51.411188Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-13T23:13:52.291871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:52.291954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:52.291976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgPreVoteResp from ac0ce77fb984259c at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:52.291992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:52.292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c received MsgVoteResp from ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:52.292012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ac0ce77fb984259c became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:52.292022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ac0ce77fb984259c elected leader ac0ce77fb984259c at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:52.293991Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.295251Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ac0ce77fb984259c","local-member-attributes":"{Name:default-k8s-diff-port-083863 ClientURLs:[https://192.168.39.3:2379]}","request-path":"/0/members/ac0ce77fb984259c/attributes","cluster-id":"1d030e9334923ef1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:13:52.295538Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:52.296053Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.296151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.296175Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.29621Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:52.29622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:52.296228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:52.297235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:52.297881Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.3:2379"}
	{"level":"info","ts":"2024-02-13T23:23:52.33263Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-02-13T23:23:52.335993Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":717,"took":"2.535604ms","hash":572379158}
	{"level":"info","ts":"2024-02-13T23:23:52.336093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":572379158,"revision":717,"compact-revision":-1}
	
	
	==> kernel <==
	 23:27:52 up 19 min,  0 users,  load average: 0.15, 0.17, 0.18
	Linux default-k8s-diff-port-083863 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] <==
	I0213 23:23:54.094082       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:23:55.093962       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:55.094250       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:23:55.094313       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:23:55.094153       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:23:55.094554       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:23:55.095367       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:24:53.961901       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:24:55.094624       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:55.094779       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:24:55.094818       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:55.095874       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:55.095978       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:24:55.096009       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:25:53.961554       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 23:26:53.962587       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:26:55.096014       1 handler_proxy.go:93] no RequestInfo found in the context
	W0213 23:26:55.096170       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:55.096176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:26:55.096269       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0213 23:26:55.096366       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:26:55.098117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] <==
	I0213 23:22:09.871116       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:22:39.359009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:22:39.881575       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:09.366128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:09.891410       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:39.375258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:39.901099       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:09.383413       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:09.910303       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:39.389543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:39.930135       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:25:09.397659       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:09.940428       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:25:16.602405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="351.519µs"
	I0213 23:25:31.605969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="174.7µs"
	E0213 23:25:39.406174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:39.950842       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:09.412330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:09.960978       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:39.420323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:39.971401       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:09.428329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:09.985003       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:39.434885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:39.997058       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] <==
	I0213 23:14:13.222665       1 server_others.go:69] "Using iptables proxy"
	I0213 23:14:13.273844       1 node.go:141] Successfully retrieved node IP: 192.168.39.3
	I0213 23:14:13.669997       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 23:14:13.670069       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:14:13.685667       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:14:13.685911       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:14:13.686146       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:14:13.690053       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:14:13.740210       1 config.go:188] "Starting service config controller"
	I0213 23:14:13.741095       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:14:13.741379       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:14:13.741422       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:14:13.746847       1 config.go:315] "Starting node config controller"
	I0213 23:14:13.746987       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:14:13.842162       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:14:13.842265       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:14:13.848059       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] <==
	W0213 23:13:54.119403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:13:54.121508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:13:54.121676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:54.121906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:55.080350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:55.080466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:55.136102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:55.136258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:55.204944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:55.205096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:55.260382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:55.260506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:55.308678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:55.308881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:55.347963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:55.348090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:55.386911       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:55.387074       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:13:55.413371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:13:55.413467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:13:55.422835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:55.422934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:55.487683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:13:55.487890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0213 23:13:58.204921       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:41 UTC, ends at Tue 2024-02-13 23:27:52 UTC. --
	Feb 13 23:24:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:25:01 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:01.599423    3817 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:25:01 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:01.599472    3817 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:25:01 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:01.599668    3817 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-85xnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rkg49_kube-system(d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:25:01 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:01.599789    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:25:16 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:16.585410    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:25:31 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:31.586655    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:25:45 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:45.587573    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:25:57 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:57.661284    3817 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:25:57 default-k8s-diff-port-083863 kubelet[3817]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:25:57 default-k8s-diff-port-083863 kubelet[3817]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:25:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:25:59 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:25:59.586442    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:26:13 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:26:13.586940    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:26:24 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:26:24.585765    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:26:39 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:26:39.585966    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:26:51 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:26:51.585537    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:26:57 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:26:57.659841    3817 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:26:57 default-k8s-diff-port-083863 kubelet[3817]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:26:57 default-k8s-diff-port-083863 kubelet[3817]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:26:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:27:05 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:27:05.587041    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:27:19 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:27:19.587620    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:27:31 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:27:31.588028    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:27:45 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:27:45.587127    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	
	
	==> storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] <==
	I0213 23:14:14.264840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:14:14.278961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:14:14.279188       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:14:14.290391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:14:14.291469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484!
	I0213 23:14:14.293237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a94c010d-1957-412e-af00-3b0a657acaf6", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484 became leader
	I0213 23:14:14.392120       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rkg49
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49: exit status 1 (71.1168ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rkg49" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (511.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0213 23:20:34.186059   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:22:03.710314   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 23:23:26.761319   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 23:24:11.137397   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:24:21.413566   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 23:27:03.709862   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-245122 -n old-k8s-version-245122
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:27:57.821393079 +0000 UTC m=+5497.436166992
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-245122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-245122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.168µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-245122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-245122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-245122 logs -n 25: (1.847571671s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:05:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:05:02.640377   49715 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:05:02.640501   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640509   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:05:02.640513   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:05:02.640736   49715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:05:02.641321   49715 out.go:298] Setting JSON to false
	I0213 23:05:02.642273   49715 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6454,"bootTime":1707859049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:05:02.642347   49715 start.go:138] virtualization: kvm guest
	I0213 23:05:02.645098   49715 out.go:177] * [default-k8s-diff-port-083863] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:05:02.646964   49715 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:05:02.646970   49715 notify.go:220] Checking for updates...
	I0213 23:05:02.648511   49715 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:05:02.650105   49715 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:05:02.651715   49715 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:05:02.653359   49715 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:05:02.655095   49715 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:05:02.657048   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:05:02.657426   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.657495   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.672324   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0213 23:05:02.672730   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.673260   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.673290   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.673647   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.673817   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.674096   49715 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:05:02.674432   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:05:02.674472   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:05:02.688915   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45705
	I0213 23:05:02.689349   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:05:02.689790   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:05:02.689816   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:05:02.690223   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:05:02.690421   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:05:02.727324   49715 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:05:02.728797   49715 start.go:298] selected driver: kvm2
	I0213 23:05:02.728815   49715 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.728927   49715 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:05:02.729600   49715 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.729674   49715 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:05:02.745692   49715 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:05:02.746106   49715 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:05:02.746172   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:05:02.746187   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:05:02.746199   49715 start_flags.go:321] config:
	{Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:05:02.746779   49715 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:05:02.748860   49715 out.go:177] * Starting control plane node default-k8s-diff-port-083863 in cluster default-k8s-diff-port-083863
	I0213 23:05:02.750290   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:05:02.750326   49715 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:05:02.750333   49715 cache.go:56] Caching tarball of preloaded images
	I0213 23:05:02.750421   49715 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:05:02.750463   49715 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:05:02.750576   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:05:02.750762   49715 start.go:365] acquiring machines lock for default-k8s-diff-port-083863: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:05:07.158187   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:10.230150   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:16.310133   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:19.382235   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:25.462139   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:28.534229   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:34.614137   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:37.686165   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:43.766206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:46.838168   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:52.918134   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:05:55.990211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:02.070192   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:05.142167   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:11.222152   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:14.294088   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:20.374194   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:23.446217   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:29.526175   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:32.598147   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:38.678146   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:41.750169   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:47.830142   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:50.902206   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:06:56.982180   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:00.054195   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:06.134182   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:09.206215   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:15.286248   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:18.358211   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:24.438162   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:27.510191   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:33.590177   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:36.662174   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:42.742237   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:45.814203   49036 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.36:22: connect: no route to host
	I0213 23:07:48.818472   49120 start.go:369] acquired machines lock for "no-preload-778731" in 4m31.005837415s
	I0213 23:07:48.818529   49120 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:07:48.818538   49120 fix.go:54] fixHost starting: 
	I0213 23:07:48.818916   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:07:48.818948   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:07:48.833483   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I0213 23:07:48.833925   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:07:48.834425   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:07:48.834452   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:07:48.834778   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:07:48.835000   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:07:48.835155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:07:48.836889   49120 fix.go:102] recreateIfNeeded on no-preload-778731: state=Stopped err=<nil>
	I0213 23:07:48.836930   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	W0213 23:07:48.837148   49120 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:07:48.840033   49120 out.go:177] * Restarting existing kvm2 VM for "no-preload-778731" ...
	I0213 23:07:48.816416   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:07:48.816456   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:07:48.818324   49036 machine.go:91] provisioned docker machine in 4m37.408860809s
	I0213 23:07:48.818361   49036 fix.go:56] fixHost completed within 4m37.431023423s
	I0213 23:07:48.818366   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 4m37.431037395s
	W0213 23:07:48.818389   49036 start.go:694] error starting host: provision: host is not running
	W0213 23:07:48.818527   49036 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0213 23:07:48.818541   49036 start.go:709] Will try again in 5 seconds ...
	I0213 23:07:48.841324   49120 main.go:141] libmachine: (no-preload-778731) Calling .Start
	I0213 23:07:48.841532   49120 main.go:141] libmachine: (no-preload-778731) Ensuring networks are active...
	I0213 23:07:48.842327   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network default is active
	I0213 23:07:48.842678   49120 main.go:141] libmachine: (no-preload-778731) Ensuring network mk-no-preload-778731 is active
	I0213 23:07:48.843032   49120 main.go:141] libmachine: (no-preload-778731) Getting domain xml...
	I0213 23:07:48.843852   49120 main.go:141] libmachine: (no-preload-778731) Creating domain...
	I0213 23:07:50.042665   49120 main.go:141] libmachine: (no-preload-778731) Waiting to get IP...
	I0213 23:07:50.043679   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.044091   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.044189   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.044069   50144 retry.go:31] will retry after 251.949505ms: waiting for machine to come up
	I0213 23:07:50.297817   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.298535   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.298567   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.298493   50144 retry.go:31] will retry after 319.494876ms: waiting for machine to come up
	I0213 23:07:50.620050   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.620443   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.620468   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.620395   50144 retry.go:31] will retry after 308.031117ms: waiting for machine to come up
	I0213 23:07:50.929942   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:50.930361   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:50.930391   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:50.930309   50144 retry.go:31] will retry after 513.800078ms: waiting for machine to come up
	I0213 23:07:51.446223   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:51.446875   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:51.446904   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:51.446813   50144 retry.go:31] will retry after 592.80917ms: waiting for machine to come up
	I0213 23:07:52.042126   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.042542   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.042573   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.042519   50144 retry.go:31] will retry after 688.102963ms: waiting for machine to come up
	I0213 23:07:53.818751   49036 start.go:365] acquiring machines lock for old-k8s-version-245122: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:07:52.732194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:52.732576   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:52.732602   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:52.732538   50144 retry.go:31] will retry after 1.143041451s: waiting for machine to come up
	I0213 23:07:53.877287   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:53.877661   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:53.877687   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:53.877624   50144 retry.go:31] will retry after 918.528315ms: waiting for machine to come up
	I0213 23:07:54.797760   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:54.798287   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:54.798314   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:54.798252   50144 retry.go:31] will retry after 1.679404533s: waiting for machine to come up
	I0213 23:07:56.479283   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:56.479853   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:56.479880   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:56.479785   50144 retry.go:31] will retry after 1.510596076s: waiting for machine to come up
	I0213 23:07:57.992757   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:07:57.993320   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:07:57.993352   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:07:57.993274   50144 retry.go:31] will retry after 2.041602638s: waiting for machine to come up
	I0213 23:08:00.036654   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:00.037130   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:00.037162   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:00.037075   50144 retry.go:31] will retry after 3.403460211s: waiting for machine to come up
	I0213 23:08:03.444689   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:03.445152   49120 main.go:141] libmachine: (no-preload-778731) DBG | unable to find current IP address of domain no-preload-778731 in network mk-no-preload-778731
	I0213 23:08:03.445176   49120 main.go:141] libmachine: (no-preload-778731) DBG | I0213 23:08:03.445088   50144 retry.go:31] will retry after 4.270182412s: waiting for machine to come up
	I0213 23:08:09.107106   49443 start.go:369] acquired machines lock for "embed-certs-340656" in 3m54.456203319s
	I0213 23:08:09.107175   49443 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:09.107194   49443 fix.go:54] fixHost starting: 
	I0213 23:08:09.107647   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:09.107696   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:09.124314   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34613
	I0213 23:08:09.124675   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:09.125131   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:08:09.125153   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:09.125509   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:09.125705   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:09.125898   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:08:09.127641   49443 fix.go:102] recreateIfNeeded on embed-certs-340656: state=Stopped err=<nil>
	I0213 23:08:09.127661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	W0213 23:08:09.127830   49443 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:09.130334   49443 out.go:177] * Restarting existing kvm2 VM for "embed-certs-340656" ...
	I0213 23:08:09.132354   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Start
	I0213 23:08:09.132546   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring networks are active...
	I0213 23:08:09.133391   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network default is active
	I0213 23:08:09.133758   49443 main.go:141] libmachine: (embed-certs-340656) Ensuring network mk-embed-certs-340656 is active
	I0213 23:08:09.134160   49443 main.go:141] libmachine: (embed-certs-340656) Getting domain xml...
	I0213 23:08:09.134954   49443 main.go:141] libmachine: (embed-certs-340656) Creating domain...
	I0213 23:08:07.719971   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.720520   49120 main.go:141] libmachine: (no-preload-778731) Found IP for machine: 192.168.83.31
	I0213 23:08:07.720541   49120 main.go:141] libmachine: (no-preload-778731) Reserving static IP address...
	I0213 23:08:07.720559   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has current primary IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.721043   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.721071   49120 main.go:141] libmachine: (no-preload-778731) DBG | skip adding static IP to network mk-no-preload-778731 - found existing host DHCP lease matching {name: "no-preload-778731", mac: "52:54:00:74:3b:82", ip: "192.168.83.31"}
	I0213 23:08:07.721086   49120 main.go:141] libmachine: (no-preload-778731) Reserved static IP address: 192.168.83.31
	I0213 23:08:07.721105   49120 main.go:141] libmachine: (no-preload-778731) DBG | Getting to WaitForSSH function...
	I0213 23:08:07.721120   49120 main.go:141] libmachine: (no-preload-778731) Waiting for SSH to be available...
	I0213 23:08:07.723769   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724343   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.724370   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.724485   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH client type: external
	I0213 23:08:07.724515   49120 main.go:141] libmachine: (no-preload-778731) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa (-rw-------)
	I0213 23:08:07.724552   49120 main.go:141] libmachine: (no-preload-778731) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:07.724577   49120 main.go:141] libmachine: (no-preload-778731) DBG | About to run SSH command:
	I0213 23:08:07.724605   49120 main.go:141] libmachine: (no-preload-778731) DBG | exit 0
	I0213 23:08:07.823050   49120 main.go:141] libmachine: (no-preload-778731) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:07.823504   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetConfigRaw
	I0213 23:08:07.824155   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:07.826730   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827237   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.827277   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.827608   49120 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/config.json ...
	I0213 23:08:07.827851   49120 machine.go:88] provisioning docker machine ...
	I0213 23:08:07.827877   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:07.828112   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828416   49120 buildroot.go:166] provisioning hostname "no-preload-778731"
	I0213 23:08:07.828464   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:07.828745   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.832015   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832438   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.832477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.832698   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.832929   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833125   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.833288   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.833480   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.833828   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.833845   49120 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-778731 && echo "no-preload-778731" | sudo tee /etc/hostname
	I0213 23:08:07.979041   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-778731
	
	I0213 23:08:07.979079   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:07.982378   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982755   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:07.982783   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:07.982982   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:07.983137   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983346   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:07.983462   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:07.983600   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:07.983946   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:07.983967   49120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-778731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-778731/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-778731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:08.122610   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:08.122641   49120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:08.122657   49120 buildroot.go:174] setting up certificates
	I0213 23:08:08.122666   49120 provision.go:83] configureAuth start
	I0213 23:08:08.122674   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetMachineName
	I0213 23:08:08.122935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:08.125641   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126016   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.126046   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.126205   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.128441   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128736   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.128780   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.128918   49120 provision.go:138] copyHostCerts
	I0213 23:08:08.128984   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:08.128997   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:08.129067   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:08.129198   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:08.129211   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:08.129248   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:08.129321   49120 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:08.129335   49120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:08.129373   49120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:08.129443   49120 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.no-preload-778731 san=[192.168.83.31 192.168.83.31 localhost 127.0.0.1 minikube no-preload-778731]
	I0213 23:08:08.326156   49120 provision.go:172] copyRemoteCerts
	I0213 23:08:08.326234   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:08.326263   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.329373   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.329952   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.329986   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.330257   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.330447   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.330599   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.330737   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.423570   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:08.447689   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:08.472766   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:08:08.496594   49120 provision.go:86] duration metric: configureAuth took 373.917105ms
	I0213 23:08:08.496623   49120 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:08.496815   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:08:08.496899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.499464   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499771   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.499801   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.499935   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.500116   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500284   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.500459   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.500651   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.500962   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.500981   49120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:08.828899   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:08.828935   49120 machine.go:91] provisioned docker machine in 1.001067662s
	I0213 23:08:08.828948   49120 start.go:300] post-start starting for "no-preload-778731" (driver="kvm2")
	I0213 23:08:08.828966   49120 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:08.828987   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:08.829378   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:08.829401   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.831985   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832340   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.832365   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.832498   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.832717   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.832873   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.833022   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:08.930192   49120 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:08.934633   49120 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:08.934660   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:08.934723   49120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:08.934804   49120 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:08.934893   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:08.945400   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:08.973850   49120 start.go:303] post-start completed in 144.888108ms
	I0213 23:08:08.973894   49120 fix.go:56] fixHost completed within 20.155355472s
	I0213 23:08:08.973917   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:08.976477   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976799   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:08.976831   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:08.976990   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:08.977177   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977358   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:08.977513   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:08.977664   49120 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:08.978069   49120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.83.31 22 <nil> <nil>}
	I0213 23:08:08.978082   49120 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:09.106952   49120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865689.053803664
	
	I0213 23:08:09.106977   49120 fix.go:206] guest clock: 1707865689.053803664
	I0213 23:08:09.106984   49120 fix.go:219] Guest: 2024-02-13 23:08:09.053803664 +0000 UTC Remote: 2024-02-13 23:08:08.973898202 +0000 UTC m=+291.312557253 (delta=79.905462ms)
	I0213 23:08:09.107004   49120 fix.go:190] guest clock delta is within tolerance: 79.905462ms
	I0213 23:08:09.107011   49120 start.go:83] releasing machines lock for "no-preload-778731", held for 20.288505954s
	I0213 23:08:09.107046   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.107372   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:09.110226   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110592   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.110623   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.110795   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111368   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111531   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:08:09.111622   49120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:09.111662   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.113712   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.114053   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.114096   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.117964   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.118031   49120 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:09.118065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:08:09.118167   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.118318   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.118615   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.120610   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121054   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:09.121088   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:09.121290   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:08:09.121461   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:08:09.121627   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:08:09.121770   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:08:09.234065   49120 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:09.240751   49120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:09.393966   49120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:09.401672   49120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:09.401767   49120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:09.426073   49120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:09.426099   49120 start.go:475] detecting cgroup driver to use...
	I0213 23:08:09.426172   49120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:09.446114   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:09.461330   49120 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:09.461404   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:09.475964   49120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:09.490801   49120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:09.621898   49120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:09.747413   49120 docker.go:233] disabling docker service ...
	I0213 23:08:09.747470   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:09.766642   49120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:09.783116   49120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:09.910634   49120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:10.052181   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:10.066413   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:10.089436   49120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:10.089505   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.100366   49120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:10.100453   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.111681   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.122231   49120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:10.132945   49120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:10.146287   49120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:10.156405   49120 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:10.156481   49120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:10.172152   49120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:10.182862   49120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:10.315633   49120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:10.509774   49120 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:10.509878   49120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:10.514924   49120 start.go:543] Will wait 60s for crictl version
	I0213 23:08:10.515016   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.518898   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:10.558596   49120 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:10.558695   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.611876   49120 ssh_runner.go:195] Run: crio --version
	I0213 23:08:10.664604   49120 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:08:10.665908   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetIP
	I0213 23:08:10.669029   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669393   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:08:10.669442   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:08:10.669676   49120 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:10.673975   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:10.686760   49120 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:08:10.686830   49120 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:10.730784   49120 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:08:10.730813   49120 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:08:10.730900   49120 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.730903   49120 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.730909   49120 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.730914   49120 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.731026   49120 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.731083   49120 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.731131   49120 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.731497   49120 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732506   49120 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.732511   49120 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.732513   49120 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:10.732543   49120 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0213 23:08:10.732577   49120 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:10.732597   49120 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.732719   49120 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.732759   49120 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:10.880038   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.891830   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0213 23:08:10.905668   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:10.930079   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:10.940850   49120 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0213 23:08:10.940894   49120 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:10.940941   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:10.942664   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:10.985299   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.011467   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.040720   49120 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.099497   49120 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0213 23:08:11.099544   49120 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0213 23:08:11.099577   49120 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.099614   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0213 23:08:11.099636   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099651   49120 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0213 23:08:11.099683   49120 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.099711   49120 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0213 23:08:11.099740   49120 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.099746   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099760   49120 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0213 23:08:11.099767   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099782   49120 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.099547   49120 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.099901   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.099916   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.107567   49120 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0213 23:08:11.107614   49120 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.107675   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:08:11.119038   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0213 23:08:11.157701   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0213 23:08:11.157799   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0213 23:08:11.157722   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0213 23:08:11.157768   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0213 23:08:11.157830   49120 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:08:11.157919   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0213 23:08:11.158002   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.200990   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0213 23:08:11.201117   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:11.299985   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.300039   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0213 23:08:11.300041   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300130   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:11.300137   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300148   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0213 23:08:11.300163   49120 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300198   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:11.300209   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:11.300216   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0213 23:08:11.300203   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0213 23:08:11.300098   49120 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300293   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:11.300096   49120 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:11.318252   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0213 23:08:11.318307   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318355   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0213 23:08:11.318520   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0213 23:08:10.406170   49443 main.go:141] libmachine: (embed-certs-340656) Waiting to get IP...
	I0213 23:08:10.407139   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.407616   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.407692   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.407598   50262 retry.go:31] will retry after 193.299479ms: waiting for machine to come up
	I0213 23:08:10.603143   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.603673   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.603696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.603627   50262 retry.go:31] will retry after 369.099644ms: waiting for machine to come up
	I0213 23:08:10.974421   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:10.974922   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:10.974953   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:10.974870   50262 retry.go:31] will retry after 418.956642ms: waiting for machine to come up
	I0213 23:08:11.395489   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:11.395974   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:11.396005   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:11.395937   50262 retry.go:31] will retry after 610.320769ms: waiting for machine to come up
	I0213 23:08:12.007695   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.008167   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.008198   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.008115   50262 retry.go:31] will retry after 624.461953ms: waiting for machine to come up
	I0213 23:08:12.634088   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:12.634519   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:12.634552   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:12.634467   50262 retry.go:31] will retry after 903.217503ms: waiting for machine to come up
	I0213 23:08:13.539114   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:13.539683   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:13.539725   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:13.539611   50262 retry.go:31] will retry after 747.647967ms: waiting for machine to come up
	I0213 23:08:14.288632   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:14.289301   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:14.289338   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:14.289236   50262 retry.go:31] will retry after 1.415080779s: waiting for machine to come up
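The retry.go:31 lines above come from libmachine polling the libvirt network for the VM's DHCP lease and sleeping a growing, jittered interval between attempts; a minimal standalone sketch of that polling pattern (lookupIP and the exact backoff growth are illustrative assumptions, not minikube's actual code) could look like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the domain reports an address, sleeping a little
// longer (with jitter) after each failed attempt, as the
// "will retry after ..." log lines suggest.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", fmt.Errorf("%s never reported an IP", domain)
}

func main() {
	if _, err := waitForIP("embed-certs-340656", 5); err != nil {
		fmt.Println(err)
	}
}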
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.810648669s)
	I0213 23:08:15.110937   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0213 23:08:15.110899   49120 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (3.810587707s)
	I0213 23:08:15.110961   49120 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:15.110969   49120 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0213 23:08:15.111009   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0213 23:08:17.178104   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.067071549s)
	I0213 23:08:17.178130   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0213 23:08:17.178156   49120 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:17.178204   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0213 23:08:15.706329   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:15.706863   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:15.706901   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:15.706769   50262 retry.go:31] will retry after 1.500671136s: waiting for machine to come up
	I0213 23:08:17.209706   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:17.210252   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:17.210278   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:17.210198   50262 retry.go:31] will retry after 1.743342291s: waiting for machine to come up
	I0213 23:08:18.956397   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:18.956934   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:18.956971   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:18.956874   50262 retry.go:31] will retry after 2.095777111s: waiting for machine to come up
	I0213 23:08:18.227625   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.049388261s)
	I0213 23:08:18.227663   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0213 23:08:18.227691   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:18.227756   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0213 23:08:21.120783   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.892997016s)
	I0213 23:08:21.120823   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0213 23:08:21.120854   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.120908   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0213 23:08:21.055630   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:21.056028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:21.056106   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:21.056004   50262 retry.go:31] will retry after 3.144708692s: waiting for machine to come up
	I0213 23:08:24.202158   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:24.202562   49443 main.go:141] libmachine: (embed-certs-340656) DBG | unable to find current IP address of domain embed-certs-340656 in network mk-embed-certs-340656
	I0213 23:08:24.202584   49443 main.go:141] libmachine: (embed-certs-340656) DBG | I0213 23:08:24.202515   50262 retry.go:31] will retry after 3.072407019s: waiting for machine to come up
	I0213 23:08:23.793772   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.672817599s)
	I0213 23:08:23.793813   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0213 23:08:23.793841   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:23.793916   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0213 23:08:25.866352   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.072399119s)
	I0213 23:08:25.866388   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0213 23:08:25.866422   49120 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:25.866469   49120 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0213 23:08:27.315469   49120 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.44897051s)
	I0213 23:08:27.315502   49120 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0213 23:08:27.315534   49120 cache_images.go:123] Successfully loaded all cached images
	I0213 23:08:27.315540   49120 cache_images.go:92] LoadImages completed in 16.584715329s
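The sequence above is minikube restoring its image cache: each cached tarball is stat'ed under /var/lib/minikube/images, the transfer is skipped when the file already exists, and the image is loaded into CRI-O's store with podman load -i. A simplified sketch of that check-then-load loop (run locally rather than over SSH, with a shortened image list, purely for illustration) might be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImages stats each expected tarball and, when present,
// loads it into the CRI-O image store via podman, mirroring the
// "copy: skipping ... (exists)" / "podman load -i ..." lines above.
func loadCachedImages(dir string, names []string) error {
	for _, name := range names {
		tar := filepath.Join(dir, name)
		if _, err := os.Stat(tar); err != nil {
			return fmt.Errorf("cached image %s not transferred: %w", name, err)
		}
		fmt.Printf("copy: skipping %s (exists)\n", tar)
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v: %s", name, err, out)
		}
	}
	return nil
}

func main() {
	images := []string{"etcd_3.5.10-0", "coredns_v1.11.1", "storage-provisioner_v5"}
	if err := loadCachedImages("/var/lib/minikube/images", images); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}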
	I0213 23:08:27.315650   49120 ssh_runner.go:195] Run: crio config
	I0213 23:08:27.383180   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:27.383203   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:27.383224   49120 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:27.383249   49120 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.31 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-778731 NodeName:no-preload-778731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:27.383445   49120 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-778731"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:27.383545   49120 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-778731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
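The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options logged at kubeadm.go:176. A rough sketch of such a render step using text/template follows; the trimmed Options struct and template are assumptions for illustration, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// Options mirrors a few of the kubeadm option fields seen in the log;
// the real struct in minikube carries many more.
type Options struct {
	AdvertiseAddress  string
	APIServerPort     int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const cfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(cfgTmpl))
	opts := Options{
		AdvertiseAddress:  "192.168.83.31",
		APIServerPort:     8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}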
	I0213 23:08:27.383606   49120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:08:27.393312   49120 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:27.393384   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:27.401513   49120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0213 23:08:27.419705   49120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:08:27.439236   49120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0213 23:08:27.457026   49120 ssh_runner.go:195] Run: grep 192.168.83.31	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:27.461679   49120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:27.474701   49120 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731 for IP: 192.168.83.31
	I0213 23:08:27.474740   49120 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:27.474922   49120 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:27.474966   49120 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:27.475042   49120 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.key
	I0213 23:08:27.475102   49120 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key.049c2370
	I0213 23:08:27.475138   49120 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key
	I0213 23:08:27.475241   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:27.475271   49120 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:27.475281   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:27.475305   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:27.475326   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:27.475360   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:27.475401   49120 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:27.475997   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:27.500212   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:27.526078   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:27.552892   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:27.579169   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:27.603962   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:27.628862   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:27.653046   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:27.681039   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:27.708026   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:28.658782   49715 start.go:369] acquired machines lock for "default-k8s-diff-port-083863" in 3m25.907988779s
	I0213 23:08:28.658844   49715 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:28.658851   49715 fix.go:54] fixHost starting: 
	I0213 23:08:28.659235   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:28.659276   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:28.677314   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37457
	I0213 23:08:28.677718   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:28.678315   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:08:28.678355   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:28.678727   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:28.678935   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:28.679109   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:08:28.680868   49715 fix.go:102] recreateIfNeeded on default-k8s-diff-port-083863: state=Stopped err=<nil>
	I0213 23:08:28.680915   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	W0213 23:08:28.681100   49715 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:28.683083   49715 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-083863" ...
	I0213 23:08:27.278610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279033   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has current primary IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.279068   49443 main.go:141] libmachine: (embed-certs-340656) Found IP for machine: 192.168.61.56
	I0213 23:08:27.279085   49443 main.go:141] libmachine: (embed-certs-340656) Reserving static IP address...
	I0213 23:08:27.279524   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.279553   49443 main.go:141] libmachine: (embed-certs-340656) Reserved static IP address: 192.168.61.56
	I0213 23:08:27.279572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | skip adding static IP to network mk-embed-certs-340656 - found existing host DHCP lease matching {name: "embed-certs-340656", mac: "52:54:00:72:e3:24", ip: "192.168.61.56"}
	I0213 23:08:27.279592   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Getting to WaitForSSH function...
	I0213 23:08:27.279609   49443 main.go:141] libmachine: (embed-certs-340656) Waiting for SSH to be available...
	I0213 23:08:27.282041   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282383   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.282417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.282517   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH client type: external
	I0213 23:08:27.282548   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa (-rw-------)
	I0213 23:08:27.282582   49443 main.go:141] libmachine: (embed-certs-340656) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:27.282598   49443 main.go:141] libmachine: (embed-certs-340656) DBG | About to run SSH command:
	I0213 23:08:27.282688   49443 main.go:141] libmachine: (embed-certs-340656) DBG | exit 0
	I0213 23:08:27.374230   49443 main.go:141] libmachine: (embed-certs-340656) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:27.374589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetConfigRaw
	I0213 23:08:27.375330   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.378273   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378648   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.378682   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.378917   49443 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/config.json ...
	I0213 23:08:27.379092   49443 machine.go:88] provisioning docker machine ...
	I0213 23:08:27.379109   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:27.379298   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379491   49443 buildroot.go:166] provisioning hostname "embed-certs-340656"
	I0213 23:08:27.379521   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.379667   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.382028   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382351   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.382404   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.382562   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.382728   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.382880   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.383023   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.383213   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.383662   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.383682   49443 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname
	I0213 23:08:27.526044   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-340656
	
	I0213 23:08:27.526075   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.529185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529526   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.529556   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.529660   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.529852   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530056   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.530203   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.530356   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:27.530695   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:27.530725   49443 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-340656' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-340656/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-340656' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:27.664926   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
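Each "About to run SSH command" / "SSH cmd err, output" pair above is one command executed on the guest with the machine's id_rsa key and host-key checking disabled. A self-contained sketch of that round trip, assuming golang.org/x/crypto/ssh rather than minikube's own SSH wrapper, could be:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs one command on the guest, roughly what the
// "About to run SSH command" / "SSH cmd err, output" pairs do.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.56:22", "docker",
		"/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa",
		`sudo hostname embed-certs-340656 && echo "embed-certs-340656" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}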
	I0213 23:08:27.664966   49443 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:27.664993   49443 buildroot.go:174] setting up certificates
	I0213 23:08:27.665004   49443 provision.go:83] configureAuth start
	I0213 23:08:27.665019   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetMachineName
	I0213 23:08:27.665429   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:27.668520   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.668912   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.668937   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.669172   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.671996   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672365   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.672411   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.672620   49443 provision.go:138] copyHostCerts
	I0213 23:08:27.672684   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:27.672706   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:27.672778   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:27.672914   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:27.672929   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:27.672966   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:27.673049   49443 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:27.673060   49443 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:27.673089   49443 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:27.673187   49443 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.embed-certs-340656 san=[192.168.61.56 192.168.61.56 localhost 127.0.0.1 minikube embed-certs-340656]
	I0213 23:08:27.924954   49443 provision.go:172] copyRemoteCerts
	I0213 23:08:27.925011   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:27.925033   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:27.928037   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928376   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:27.928410   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:27.928588   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:27.928779   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:27.928960   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:27.929085   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.019335   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:28.043949   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0213 23:08:28.066824   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:08:28.089010   49443 provision.go:86] duration metric: configureAuth took 423.986916ms
	I0213 23:08:28.089043   49443 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:28.089251   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:28.089316   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.091655   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.091955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.091984   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.092151   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.092310   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092440   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.092553   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.092694   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.092999   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.093014   49443 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:28.402931   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:28.402953   49443 machine.go:91] provisioned docker machine in 1.023849221s
	I0213 23:08:28.402962   49443 start.go:300] post-start starting for "embed-certs-340656" (driver="kvm2")
	I0213 23:08:28.402972   49443 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:28.402986   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.403246   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:28.403266   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.405815   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.406201   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.406331   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.406514   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.406703   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.406867   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.500638   49443 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:28.504820   49443 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:28.504839   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:28.504899   49443 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:28.504967   49443 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:28.505051   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:28.514593   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:28.536607   49443 start.go:303] post-start completed in 133.632311ms
	I0213 23:08:28.536653   49443 fix.go:56] fixHost completed within 19.429451259s
	I0213 23:08:28.536673   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.539355   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539715   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.539739   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.539914   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.540115   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540275   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.540420   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.540581   49443 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:28.540917   49443 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.56 22 <nil> <nil>}
	I0213 23:08:28.540932   49443 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:28.658649   49443 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865708.631208852
	
	I0213 23:08:28.658674   49443 fix.go:206] guest clock: 1707865708.631208852
	I0213 23:08:28.658682   49443 fix.go:219] Guest: 2024-02-13 23:08:28.631208852 +0000 UTC Remote: 2024-02-13 23:08:28.536657964 +0000 UTC m=+254.042699377 (delta=94.550888ms)
	I0213 23:08:28.658701   49443 fix.go:190] guest clock delta is within tolerance: 94.550888ms
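The delta reported at fix.go:219 is simply the guest clock minus the host-observed time (2024-02-13 23:08:28.631208852 minus 23:08:28.536657964 = 94.550888ms), compared against a tolerance. A tiny sketch of that check follows; the one-second tolerance is an assumption, since the log does not state the actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host's view of
// "now", and whether that skew falls inside the given tolerance.
func clockDelta(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(remote)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Date(2024, 2, 13, 23, 8, 28, 631208852, time.UTC)
	remote := time.Date(2024, 2, 13, 23, 8, 28, 536657964, time.UTC)
	d, ok := clockDelta(guest, remote, time.Second) // 1s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=94.550888ms
}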
	I0213 23:08:28.658707   49443 start.go:83] releasing machines lock for "embed-certs-340656", held for 19.551560323s
	I0213 23:08:28.658730   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.658982   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:28.662069   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662449   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.662480   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.662651   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663245   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663430   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:08:28.663521   49443 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:28.663567   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.663688   49443 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:28.663712   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:08:28.666417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666696   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.666867   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.666900   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667039   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667161   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:28.667185   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:28.667234   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:08:28.667418   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667467   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:08:28.667518   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.667589   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:08:28.667736   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:08:28.782794   49443 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:28.788743   49443 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:28.933478   49443 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:28.940543   49443 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:28.940632   49443 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:28.958972   49443 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:28.958994   49443 start.go:475] detecting cgroup driver to use...
	I0213 23:08:28.959084   49443 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:28.977833   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:28.996142   49443 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:28.996205   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:29.015509   49443 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:29.029839   49443 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:29.140405   49443 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:29.265524   49443 docker.go:233] disabling docker service ...
	I0213 23:08:29.265597   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:29.283479   49443 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:29.300116   49443 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:29.428731   49443 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:29.555072   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:29.569803   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:29.589259   49443 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:29.589329   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.600653   49443 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:29.600732   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.612313   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.624637   49443 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:29.636279   49443 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:29.648496   49443 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:29.658957   49443 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:29.659020   49443 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:29.673605   49443 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:29.684589   49443 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:29.800899   49443 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:29.989345   49443 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:29.989423   49443 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:29.995420   49443 start.go:543] Will wait 60s for crictl version
	I0213 23:08:29.995489   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:08:30.000012   49443 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:30.047026   49443 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:30.047114   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.095456   49443 ssh_runner.go:195] Run: crio --version
	I0213 23:08:30.146027   49443 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
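The crictl/CRI-O setup above boils down to a fixed list of shell edits run inside the guest: point pause_image at registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, set conmon_cgroup to pod, then daemon-reload and restart crio. A sketch that replays those same commands locally (purely illustrative; minikube runs them over SSH via ssh_runner) might be:

package main

import (
	"fmt"
	"os/exec"
)

// crioEdits mirrors the sed-based edits in the log: select the pause
// image and cgroup manager in CRI-O's drop-in config, then restart it.
var crioEdits = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, c := range crioEdits {
		// In minikube these run inside the VM; here they run locally
		// only to show the shape of the sequence.
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v\n%s\n", c, err, out)
			return
		}
	}
	fmt.Println("CRI-O reconfigured for pause:3.9 and cgroupfs")
}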
	I0213 23:08:28.684576   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Start
	I0213 23:08:28.684757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring networks are active...
	I0213 23:08:28.685582   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network default is active
	I0213 23:08:28.685942   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Ensuring network mk-default-k8s-diff-port-083863 is active
	I0213 23:08:28.686429   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Getting domain xml...
	I0213 23:08:28.687208   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Creating domain...
	I0213 23:08:30.003148   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting to get IP...
	I0213 23:08:30.004175   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004634   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.004725   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.004599   50394 retry.go:31] will retry after 210.109414ms: waiting for machine to come up
	I0213 23:08:30.215983   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216407   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.216439   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.216359   50394 retry.go:31] will retry after 367.743906ms: waiting for machine to come up
	I0213 23:08:30.586081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586629   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.586663   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.586583   50394 retry.go:31] will retry after 342.736609ms: waiting for machine to come up
	I0213 23:08:30.931248   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931707   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:30.931738   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:30.931656   50394 retry.go:31] will retry after 597.326691ms: waiting for machine to come up
	I0213 23:08:31.530395   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530818   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:31.530848   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:31.530767   50394 retry.go:31] will retry after 749.518323ms: waiting for machine to come up
	I0213 23:08:32.281688   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282102   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:32.282138   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:32.282052   50394 retry.go:31] will retry after 760.722423ms: waiting for machine to come up
	I0213 23:08:27.731687   49120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:27.755515   49120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:27.774677   49120 ssh_runner.go:195] Run: openssl version
	I0213 23:08:27.780042   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:27.789684   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794384   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.794443   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:27.800052   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:27.809570   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:27.818781   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823148   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.823241   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:27.829043   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:27.839290   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:27.849614   49120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854661   49120 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.854720   49120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:27.860365   49120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:27.870548   49120 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:27.874967   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:27.880745   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:27.886409   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:27.892063   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:27.897857   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:27.903804   49120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:27.909720   49120 kubeadm.go:404] StartCluster: {Name:no-preload-778731 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-778731 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:27.909833   49120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:27.909924   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:27.951061   49120 cri.go:89] found id: ""
	I0213 23:08:27.951158   49120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:27.961916   49120 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:27.961941   49120 kubeadm.go:636] restartCluster start
	I0213 23:08:27.961993   49120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:27.971549   49120 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:27.972633   49120 kubeconfig.go:92] found "no-preload-778731" server: "https://192.168.83.31:8443"
	I0213 23:08:27.975092   49120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:27.983592   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:27.983650   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:27.993448   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.483988   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.484086   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.499804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:28.984581   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:28.984671   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:28.995887   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.484572   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.484680   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.496906   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:29.984503   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:29.984569   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:29.997813   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.484312   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.484391   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.501606   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.984144   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:30.984237   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:30.999418   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.483900   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.483977   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.498536   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:31.983688   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:31.983783   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:31.998804   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:32.484556   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.484662   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:32.499238   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:30.147474   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetIP
	I0213 23:08:30.150438   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.150826   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:08:30.150857   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:08:30.151054   49443 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:30.155517   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:30.168463   49443 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:30.168543   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:30.210212   49443 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:30.210296   49443 ssh_runner.go:195] Run: which lz4
	I0213 23:08:30.214665   49443 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:08:30.219355   49443 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:30.219383   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:32.244671   49443 crio.go:444] Took 2.030037 seconds to copy over tarball
	I0213 23:08:32.244757   49443 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:33.043974   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044478   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:33.044512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:33.044417   50394 retry.go:31] will retry after 1.030870704s: waiting for machine to come up
	I0213 23:08:34.077209   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077662   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:34.077692   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:34.077625   50394 retry.go:31] will retry after 1.450536952s: waiting for machine to come up
	I0213 23:08:35.529659   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530101   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:35.530135   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:35.530042   50394 retry.go:31] will retry after 1.82898716s: waiting for machine to come up
	I0213 23:08:37.360889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361314   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:37.361343   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:37.361270   50394 retry.go:31] will retry after 1.83473409s: waiting for machine to come up
	I0213 23:08:32.984096   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:32.984203   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.001189   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.483705   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.499694   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:33.983927   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:33.984057   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:33.999205   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.483708   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.483798   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.498840   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:34.984372   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:34.984461   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:34.999079   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.483661   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.483789   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.497573   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.983985   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:35.984088   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:35.995899   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.484546   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.484660   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.496286   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.983902   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.984113   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.995778   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.484405   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.484518   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.495219   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:35.549721   49443 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.304931423s)
	I0213 23:08:35.549748   49443 crio.go:451] Took 3.305051 seconds to extract the tarball
	I0213 23:08:35.549778   49443 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:35.590195   49443 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:35.640735   49443 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:35.640768   49443 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:35.640850   49443 ssh_runner.go:195] Run: crio config
	I0213 23:08:35.707018   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:35.707046   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:35.707072   49443 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:35.707117   49443 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.56 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-340656 NodeName:embed-certs-340656 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:35.707294   49443 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-340656"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:35.707405   49443 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-340656 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:08:35.707483   49443 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:35.717170   49443 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:35.717251   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:35.726586   49443 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0213 23:08:35.744139   49443 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:35.761480   49443 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0213 23:08:35.779911   49443 ssh_runner.go:195] Run: grep 192.168.61.56	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:35.784152   49443 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:35.799376   49443 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656 for IP: 192.168.61.56
	I0213 23:08:35.799417   49443 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:35.799601   49443 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:35.799657   49443 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:35.799766   49443 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/client.key
	I0213 23:08:35.799859   49443 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key.aef5f426
	I0213 23:08:35.799913   49443 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key
	I0213 23:08:35.800053   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:35.800091   49443 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:35.800107   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:35.800143   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:35.800180   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:35.800215   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:35.800276   49443 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:35.801130   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:35.829983   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:35.856832   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:35.883713   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/embed-certs-340656/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:35.910759   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:35.937208   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:35.963904   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:35.991562   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:36.022900   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:36.049084   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:36.074152   49443 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:36.098863   49443 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:36.115588   49443 ssh_runner.go:195] Run: openssl version
	I0213 23:08:36.120864   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:36.130552   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.134999   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.135068   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:36.140621   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:36.150963   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:36.160917   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165428   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.165472   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:36.171493   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:36.181635   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:36.191753   49443 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196368   49443 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.196444   49443 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:36.201985   49443 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:36.211839   49443 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:36.216608   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:36.222594   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:36.228585   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:36.234646   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:36.240579   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:36.246642   49443 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:36.252961   49443 kubeadm.go:404] StartCluster: {Name:embed-certs-340656 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-340656 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:36.253087   49443 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:36.253149   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:36.297601   49443 cri.go:89] found id: ""
	I0213 23:08:36.297705   49443 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:36.308068   49443 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:36.308094   49443 kubeadm.go:636] restartCluster start
	I0213 23:08:36.308152   49443 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:36.318071   49443 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.319274   49443 kubeconfig.go:92] found "embed-certs-340656" server: "https://192.168.61.56:8443"
	I0213 23:08:36.321573   49443 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:36.331006   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.331059   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.342313   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:36.831994   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:36.832106   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:36.845071   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.331654   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.331724   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.344311   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.831903   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.831999   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.843671   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.331225   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.331337   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.349021   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:38.831196   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:38.831292   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:38.847050   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.332089   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.332162   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.348108   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:39.198188   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198570   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:39.198596   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:39.198528   50394 retry.go:31] will retry after 2.722095348s: waiting for machine to come up
	I0213 23:08:41.923545   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923954   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:41.923985   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:41.923904   50394 retry.go:31] will retry after 2.239772531s: waiting for machine to come up
	I0213 23:08:37.984640   49120 api_server.go:166] Checking apiserver status ...
	I0213 23:08:37.984743   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:37.999300   49120 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:37.999332   49120 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:37.999340   49120 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:37.999349   49120 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:37.999402   49120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:38.046199   49120 cri.go:89] found id: ""
	I0213 23:08:38.046287   49120 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:38.061697   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:38.071295   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:38.071378   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080401   49120 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:38.080438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:38.209853   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.403696   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.193792627s)
	I0213 23:08:39.403733   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.602387   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.703317   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:39.783257   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:39.783347   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.284357   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:40.784437   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.284302   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:41.783582   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.284435   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:42.312653   49120 api_server.go:72] duration metric: took 2.529396171s to wait for apiserver process to appear ...
	I0213 23:08:42.312698   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:42.312719   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:42.313220   49120 api_server.go:269] stopped: https://192.168.83.31:8443/healthz: Get "https://192.168.83.31:8443/healthz": dial tcp 192.168.83.31:8443: connect: connection refused
	I0213 23:08:39.832020   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:39.832156   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:39.848229   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.331855   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.331992   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.347635   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:40.831070   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:40.831185   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:40.847184   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.331346   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.331444   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.346518   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:41.831081   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:41.831160   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:41.846752   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.331298   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.331389   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.348782   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:42.831278   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:42.831373   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:42.846241   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.331807   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.331876   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.346998   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:43.831697   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:43.831792   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:43.843733   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.331647   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.331762   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.343476   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:44.165021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165387   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | unable to find current IP address of domain default-k8s-diff-port-083863 in network mk-default-k8s-diff-port-083863
	I0213 23:08:44.165414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | I0213 23:08:44.165357   50394 retry.go:31] will retry after 2.886798605s: waiting for machine to come up
	I0213 23:08:47.055186   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055880   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Found IP for machine: 192.168.39.3
	I0213 23:08:47.055923   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has current primary IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.055936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserving static IP address...
	I0213 23:08:47.056480   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.056512   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Reserved static IP address: 192.168.39.3
	I0213 23:08:47.056537   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | skip adding static IP to network mk-default-k8s-diff-port-083863 - found existing host DHCP lease matching {name: "default-k8s-diff-port-083863", mac: "52:54:00:7c:77:f5", ip: "192.168.39.3"}
	I0213 23:08:47.056552   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Getting to WaitForSSH function...
	I0213 23:08:47.056567   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Waiting for SSH to be available...
	I0213 23:08:47.059414   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059844   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.059882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.059991   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH client type: external
	I0213 23:08:47.060025   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa (-rw-------)
	I0213 23:08:47.060061   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:08:47.060077   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | About to run SSH command:
	I0213 23:08:47.060093   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | exit 0
	I0213 23:08:47.154417   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | SSH cmd err, output: <nil>: 
	I0213 23:08:47.154807   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetConfigRaw
	I0213 23:08:47.155614   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.158506   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.158979   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.159005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.159297   49715 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/config.json ...
	I0213 23:08:47.159557   49715 machine.go:88] provisioning docker machine ...
	I0213 23:08:47.159577   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:47.159833   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160012   49715 buildroot.go:166] provisioning hostname "default-k8s-diff-port-083863"
	I0213 23:08:47.160038   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.160240   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.163021   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163444   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.163476   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.163705   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.163908   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164070   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.164234   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.164391   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.164762   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.164777   49715 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-083863 && echo "default-k8s-diff-port-083863" | sudo tee /etc/hostname
	I0213 23:08:47.304583   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-083863
	
	I0213 23:08:47.304617   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.307729   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308160   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.308196   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.308345   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.308541   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308713   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.308921   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.309148   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.309520   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.309539   49715 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-083863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-083863/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-083863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:08:47.442924   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:08:47.442958   49715 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:08:47.442989   49715 buildroot.go:174] setting up certificates
	I0213 23:08:47.443006   49715 provision.go:83] configureAuth start
	I0213 23:08:47.443024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetMachineName
	I0213 23:08:47.443287   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:47.446220   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446611   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.446646   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.446821   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.449591   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.449920   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.449989   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.450162   49715 provision.go:138] copyHostCerts
	I0213 23:08:47.450221   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:08:47.450241   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:08:47.450305   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:08:47.450482   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:08:47.450497   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:08:47.450532   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:08:47.450614   49715 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:08:47.450625   49715 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:08:47.450651   49715 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:08:47.450720   49715 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-083863 san=[192.168.39.3 192.168.39.3 localhost 127.0.0.1 minikube default-k8s-diff-port-083863]
	I0213 23:08:47.522550   49715 provision.go:172] copyRemoteCerts
	I0213 23:08:47.522618   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:08:47.522647   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.525731   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526189   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.526230   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.526410   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.526610   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.526814   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.526971   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:47.626666   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:08:42.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.095528   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:46.095564   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:46.095581   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.178470   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.178500   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.313729   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.318658   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.318686   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:46.813274   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:46.819766   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:46.819808   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.313432   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.325228   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:47.325274   49120 api_server.go:103] status: https://192.168.83.31:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:47.813568   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:08:47.819686   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:08:47.829842   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:08:47.829896   49120 api_server.go:131] duration metric: took 5.517189469s to wait for apiserver health ...
	I0213 23:08:47.829907   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:08:47.829915   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:47.831685   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:48.354933   49036 start.go:369] acquired machines lock for "old-k8s-version-245122" in 54.536117689s
	I0213 23:08:48.354988   49036 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:08:48.354996   49036 fix.go:54] fixHost starting: 
	I0213 23:08:48.355410   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:08:48.355447   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:08:48.375953   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0213 23:08:48.376414   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:08:48.376997   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:08:48.377034   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:08:48.377373   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:08:48.377578   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:08:48.377709   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:08:48.379630   49036 fix.go:102] recreateIfNeeded on old-k8s-version-245122: state=Stopped err=<nil>
	I0213 23:08:48.379660   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	W0213 23:08:48.379822   49036 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:08:48.381473   49036 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-245122" ...
	I0213 23:08:44.831390   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:44.831503   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:44.845068   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.331710   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.331800   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.343755   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:45.831306   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:45.831415   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:45.844972   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.331510   49443 api_server.go:166] Checking apiserver status ...
	I0213 23:08:46.331596   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:46.343475   49443 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:46.343509   49443 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:08:46.343520   49443 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:08:46.343532   49443 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:08:46.343595   49443 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:46.388343   49443 cri.go:89] found id: ""
	I0213 23:08:46.388417   49443 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:08:46.403792   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:08:46.413139   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:08:46.413197   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422541   49443 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:08:46.422566   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:46.551204   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.427625   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.656205   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.776652   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:47.860844   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:08:47.860942   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.362058   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:48.861851   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:49.361973   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:47.655867   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0213 23:08:47.687226   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:08:47.719579   49715 provision.go:86] duration metric: configureAuth took 276.554247ms
	I0213 23:08:47.719610   49715 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:08:47.719857   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:08:47.719945   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:47.723023   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723353   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:47.723386   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:47.723686   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:47.723889   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724074   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:47.724299   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:47.724469   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:47.724860   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:47.724878   49715 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:08:48.093490   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:08:48.093519   49715 machine.go:91] provisioned docker machine in 933.948787ms
	I0213 23:08:48.093529   49715 start.go:300] post-start starting for "default-k8s-diff-port-083863" (driver="kvm2")
	I0213 23:08:48.093540   49715 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:08:48.093553   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.093887   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:08:48.093922   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.096941   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097351   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.097385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.097701   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.097936   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.098145   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.098367   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.188626   49715 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:08:48.193282   49715 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:08:48.193320   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:08:48.193406   49715 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:08:48.193500   49715 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:08:48.193597   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:08:48.202782   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:48.235000   49715 start.go:303] post-start completed in 141.454861ms
	I0213 23:08:48.235032   49715 fix.go:56] fixHost completed within 19.576181803s
	I0213 23:08:48.235051   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.238450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.238992   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.239024   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.239320   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.239535   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239683   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.239846   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.240085   49715 main.go:141] libmachine: Using SSH client type: native
	I0213 23:08:48.240390   49715 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0213 23:08:48.240401   49715 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:08:48.354769   49715 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865728.300012904
	
	I0213 23:08:48.354799   49715 fix.go:206] guest clock: 1707865728.300012904
	I0213 23:08:48.354811   49715 fix.go:219] Guest: 2024-02-13 23:08:48.300012904 +0000 UTC Remote: 2024-02-13 23:08:48.235035663 +0000 UTC m=+225.644270499 (delta=64.977241ms)
	I0213 23:08:48.354837   49715 fix.go:190] guest clock delta is within tolerance: 64.977241ms
	I0213 23:08:48.354845   49715 start.go:83] releasing machines lock for "default-k8s-diff-port-083863", held for 19.696026805s
	I0213 23:08:48.354884   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.355246   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:48.358586   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359040   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.359081   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.359323   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.359961   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360127   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:08:48.360200   49715 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:08:48.360233   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.360372   49715 ssh_runner.go:195] Run: cat /version.json
	I0213 23:08:48.360398   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:08:48.363529   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.363715   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364166   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364357   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:48.364394   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:48.364461   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364656   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.364824   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:08:48.364882   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370192   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:08:48.370221   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.370404   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:08:48.370677   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:08:48.457230   49715 ssh_runner.go:195] Run: systemctl --version
	I0213 23:08:48.484954   49715 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:08:48.636752   49715 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:08:48.644369   49715 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:08:48.644452   49715 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:08:48.667562   49715 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:08:48.667594   49715 start.go:475] detecting cgroup driver to use...
	I0213 23:08:48.667684   49715 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:08:48.689737   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:08:48.708806   49715 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:08:48.708876   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:08:48.728530   49715 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:08:48.746819   49715 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:08:48.877519   49715 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:08:49.069574   49715 docker.go:233] disabling docker service ...
	I0213 23:08:49.069661   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:08:49.103853   49715 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:08:49.122356   49715 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:08:49.272225   49715 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:08:49.412111   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:08:49.428799   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:08:49.449679   49715 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:08:49.449734   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.465458   49715 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:08:49.465523   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.480399   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.494161   49715 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:08:49.507964   49715 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:08:49.522486   49715 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:08:49.534468   49715 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:08:49.534538   49715 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:08:49.554260   49715 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:08:49.566868   49715 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:08:49.725125   49715 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:08:49.963096   49715 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:08:49.963172   49715 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:08:49.970420   49715 start.go:543] Will wait 60s for crictl version
	I0213 23:08:49.970508   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:08:49.976177   49715 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:08:50.024316   49715 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:08:50.024407   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.080031   49715 ssh_runner.go:195] Run: crio --version
	I0213 23:08:50.133918   49715 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:08:48.382835   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Start
	I0213 23:08:48.383129   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring networks are active...
	I0213 23:08:48.384069   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network default is active
	I0213 23:08:48.384458   49036 main.go:141] libmachine: (old-k8s-version-245122) Ensuring network mk-old-k8s-version-245122 is active
	I0213 23:08:48.385051   49036 main.go:141] libmachine: (old-k8s-version-245122) Getting domain xml...
	I0213 23:08:48.387192   49036 main.go:141] libmachine: (old-k8s-version-245122) Creating domain...
	I0213 23:08:49.933195   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting to get IP...
	I0213 23:08:49.934463   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:49.935084   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:49.935109   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:49.934961   50565 retry.go:31] will retry after 206.578168ms: waiting for machine to come up
	I0213 23:08:50.143704   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.144239   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.144263   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.144177   50565 retry.go:31] will retry after 378.113433ms: waiting for machine to come up
	I0213 23:08:50.524043   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.524670   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.524703   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.524629   50565 retry.go:31] will retry after 468.261692ms: waiting for machine to come up
	I0213 23:08:50.995002   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:50.995616   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:50.995645   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:50.995524   50565 retry.go:31] will retry after 437.792222ms: waiting for machine to come up
	I0213 23:08:50.135427   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetIP
	I0213 23:08:50.139087   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139523   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:08:50.139556   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:08:50.139840   49715 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0213 23:08:50.145191   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:50.159814   49715 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:08:50.159873   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:50.208873   49715 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:08:50.208947   49715 ssh_runner.go:195] Run: which lz4
	I0213 23:08:50.214254   49715 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:08:50.219979   49715 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:08:50.220013   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:08:47.833116   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:47.862550   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:47.895377   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:47.919843   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:47.919894   49120 system_pods.go:61] "coredns-76f75df574-hgzcn" [a384c748-9d5b-4d07-b03c-5a65b3d7a450] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:47.919907   49120 system_pods.go:61] "etcd-no-preload-778731" [44169811-10f1-4d3e-8eaa-b525dd0f722f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:47.919920   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [126febb5-8d0b-4162-b320-7fd718b4a974] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:47.919929   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [a7be9641-1bd0-41f9-853a-73b522c60746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:47.919945   49120 system_pods.go:61] "kube-proxy-msxf7" [81201ce9-6f3d-457c-b582-eb8a17dbf4eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:47.919968   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [72f487c5-c42e-4e42-85c8-3b3df6bccd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:47.919984   49120 system_pods.go:61] "metrics-server-57f55c9bc5-r44rm" [ae0751b9-57fe-4d99-b41c-5c685b846e1c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:47.919996   49120 system_pods.go:61] "storage-provisioner" [e1d157b3-7ce1-488c-a3ea-ab0e8da83fb1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:47.920009   49120 system_pods.go:74] duration metric: took 24.606913ms to wait for pod list to return data ...
	I0213 23:08:47.920031   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:47.930765   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:47.930810   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:47.930827   49120 node_conditions.go:105] duration metric: took 10.783663ms to run NodePressure ...
	I0213 23:08:47.930848   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:48.401055   49120 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407167   49120 kubeadm.go:787] kubelet initialised
	I0213 23:08:48.407238   49120 kubeadm.go:788] duration metric: took 6.148946ms waiting for restarted kubelet to initialise ...
	I0213 23:08:48.407260   49120 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:48.414170   49120 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:50.427883   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:52.431208   49120 pod_ready.go:102] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:49.861114   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.361308   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.861249   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:08:50.894694   49443 api_server.go:72] duration metric: took 3.033850926s to wait for apiserver process to appear ...
	I0213 23:08:50.894724   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:08:50.894746   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:50.895231   49443 api_server.go:269] stopped: https://192.168.61.56:8443/healthz: Get "https://192.168.61.56:8443/healthz": dial tcp 192.168.61.56:8443: connect: connection refused
	I0213 23:08:51.394882   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:51.435131   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:51.435705   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:51.435733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:51.435616   50565 retry.go:31] will retry after 631.237829ms: waiting for machine to come up
	I0213 23:08:52.069120   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.069697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.069719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.069617   50565 retry.go:31] will retry after 756.691364ms: waiting for machine to come up
	I0213 23:08:52.828166   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:52.828631   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:52.828662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:52.828562   50565 retry.go:31] will retry after 761.909065ms: waiting for machine to come up
	I0213 23:08:53.592196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:53.592753   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:53.592779   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:53.592685   50565 retry.go:31] will retry after 1.153412106s: waiting for machine to come up
	I0213 23:08:54.747606   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:54.748184   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:54.748221   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:54.748113   50565 retry.go:31] will retry after 1.198347182s: waiting for machine to come up
	I0213 23:08:55.947978   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:55.948524   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:55.948545   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:55.948469   50565 retry.go:31] will retry after 2.116247229s: waiting for machine to come up
	I0213 23:08:52.713946   49715 crio.go:444] Took 2.499735 seconds to copy over tarball
	I0213 23:08:52.714030   49715 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:08:56.483125   49715 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.769061262s)
	I0213 23:08:56.483156   49715 crio.go:451] Took 3.769175 seconds to extract the tarball
	I0213 23:08:56.483167   49715 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:08:56.524290   49715 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:08:56.576319   49715 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:08:56.576349   49715 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:08:56.576435   49715 ssh_runner.go:195] Run: crio config
	I0213 23:08:56.633481   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:08:56.633514   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:56.633537   49715 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:08:56.633561   49715 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-083863 NodeName:default-k8s-diff-port-083863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:08:56.633744   49715 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-083863"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:08:56.633838   49715 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-083863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0213 23:08:56.633930   49715 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:08:56.643018   49715 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:08:56.643110   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:08:56.652116   49715 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0213 23:08:56.670140   49715 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:08:56.687456   49715 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0213 23:08:56.707317   49715 ssh_runner.go:195] Run: grep 192.168.39.3	control-plane.minikube.internal$ /etc/hosts
	I0213 23:08:56.711339   49715 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:08:56.726090   49715 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863 for IP: 192.168.39.3
	I0213 23:08:56.726139   49715 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:08:56.726320   49715 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:08:56.726381   49715 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:08:56.726486   49715 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.key
	I0213 23:08:56.755690   49715 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key.599d509e
	I0213 23:08:56.755797   49715 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key
	I0213 23:08:56.755953   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:08:56.755996   49715 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:08:56.756008   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:08:56.756042   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:08:56.756072   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:08:56.756104   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:08:56.756157   49715 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:08:56.756999   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:08:56.790072   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:08:56.821182   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:08:56.849753   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:08:56.875241   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:08:56.901057   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:08:56.929989   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:08:56.959488   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:08:56.991678   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:08:57.019756   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:08:57.047743   49715 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:08:57.078812   49715 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:08:57.097081   49715 ssh_runner.go:195] Run: openssl version
	I0213 23:08:57.103754   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:08:57.117364   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124069   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.124160   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:08:57.132252   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:08:57.145398   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:08:57.158348   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164091   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.164158   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:08:57.171693   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:08:57.185004   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:08:57.198410   49715 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204432   49715 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.204495   49715 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:08:57.210331   49715 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:08:57.221567   49715 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:08:57.226357   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:08:57.232307   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:08:57.239034   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:08:57.245485   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:08:57.252782   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:08:57.259406   49715 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:08:57.265644   49715 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-083863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-083863 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:08:57.265744   49715 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:08:57.265820   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:08:57.313129   49715 cri.go:89] found id: ""
	I0213 23:08:57.313210   49715 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:08:57.323716   49715 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:08:57.323747   49715 kubeadm.go:636] restartCluster start
	I0213 23:08:57.323837   49715 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:08:57.333805   49715 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:57.335100   49715 kubeconfig.go:92] found "default-k8s-diff-port-083863" server: "https://192.168.39.3:8444"
	I0213 23:08:57.337669   49715 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:08:57.347371   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.347434   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.359168   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:53.424206   49120 pod_ready.go:92] pod "coredns-76f75df574-hgzcn" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:53.424235   49120 pod_ready.go:81] duration metric: took 5.01002772s waiting for pod "coredns-76f75df574-hgzcn" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:53.424249   49120 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:55.432858   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:54.636558   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.636595   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.636612   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.714679   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:08:54.714727   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:08:54.894910   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:54.909668   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:54.909716   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.395328   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.401124   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.401155   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:55.895827   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:55.901814   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:55.901848   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.395611   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.402367   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.402404   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:56.894889   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:56.900228   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:56.900267   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.394804   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.404774   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.404811   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:57.895090   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:57.902470   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:08:57.902527   49443 api_server.go:103] status: https://192.168.61.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:08:58.395650   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:08:58.404727   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:08:58.413383   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:08:58.413425   49443 api_server.go:131] duration metric: took 7.518687282s to wait for apiserver health ...
	I0213 23:08:58.413437   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:08:58.413444   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:08:58.415682   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:08:58.417320   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:08:58.436763   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:08:58.468658   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:08:58.482719   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:08:58.482755   49443 system_pods.go:61] "coredns-5dd5756b68-h86p6" [9d274749-fe12-43c1-b30c-70586c04daf2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:08:58.482762   49443 system_pods.go:61] "etcd-embed-certs-340656" [1fbdd834-b8c1-48c9-aab7-3c72d7012eca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:08:58.482770   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [3bb1cfb1-8fea-4b7a-a459-a709010ee6cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:08:58.482783   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [f8035337-1819-4b0b-83eb-1992445c0185] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:08:58.482790   49443 system_pods.go:61] "kube-proxy-swxwt" [2bbc949c-f478-4c01-9e81-884a05a9a0c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:08:58.482795   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [923ef614-eef1-4e32-ae83-2e540841060f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:08:58.482831   49443 system_pods.go:61] "metrics-server-57f55c9bc5-lmcwv" [a948cc5d-01b6-4298-a7c7-24d9704497d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:08:58.482846   49443 system_pods.go:61] "storage-provisioner" [9fc17bde-ff30-4ed7-829c-3d59badd55f3] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:08:58.482854   49443 system_pods.go:74] duration metric: took 14.17202ms to wait for pod list to return data ...
	I0213 23:08:58.482865   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:08:58.487666   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:08:58.487710   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:08:58.487723   49443 node_conditions.go:105] duration metric: took 4.851634ms to run NodePressure ...
	I0213 23:08:58.487743   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:08:59.044504   49443 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088347   49443 kubeadm.go:787] kubelet initialised
	I0213 23:08:59.088379   49443 kubeadm.go:788] duration metric: took 43.842389ms waiting for restarted kubelet to initialise ...
	I0213 23:08:59.088390   49443 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:08:59.105292   49443 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
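	The "waiting up to 4m0s for pod ... to be Ready" lines correspond to polling the pod's Ready condition through the Kubernetes API. A small client-go sketch of that check is below; the kubeconfig path, poll interval, and overall shape are assumptions for illustration, not minikube's pod_ready.go code:

    // pod_ready_wait.go: poll a pod until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-h86p6", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // poll interval is a guess
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
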
	I0213 23:08:58.067162   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:08:58.067629   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:08:58.067662   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:08:58.067589   50565 retry.go:31] will retry after 2.740013841s: waiting for machine to come up
	I0213 23:09:00.811129   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:00.811590   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:00.811623   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:00.811537   50565 retry.go:31] will retry after 3.449503247s: waiting for machine to come up
	I0213 23:08:57.848036   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:57.848128   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:57.863924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.348357   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.348539   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.364081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:58.848249   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:58.848321   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:58.860671   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.348282   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.348385   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.364226   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:08:59.847737   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:08:59.847838   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:08:59.864832   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.348231   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.348311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.360532   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:00.848115   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:00.848220   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:00.861558   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.348101   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.348192   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.360173   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:01.847696   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:01.847788   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:01.859631   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:02.348255   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.348353   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.363081   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
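	The repeated "Checking apiserver status ... sudo pgrep -xnf kube-apiserver.*minikube.*" attempts above are a fixed-interval retry loop: pgrep exits non-zero while no matching process exists, and the check is repeated roughly every half second until a pid appears or the caller gives up. minikube runs the command on the guest over SSH; the sketch below runs pgrep locally just to show the shape of the loop (deadline and interval are assumptions):

    // apiserver_pid_wait.go: retry pgrep until kube-apiserver has a pid.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func apiserverPID() (string, error) {
        // pgrep exits with status 1 when nothing matches, surfacing here as err.
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // deadline is illustrative
        for time.Now().Before(deadline) {
            if pid, err := apiserverPID(); err == nil && pid != "" {
                fmt.Println("kube-apiserver pid:", pid)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
        }
        fmt.Println("stopped: unable to get apiserver pid before deadline")
    }
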
	I0213 23:08:57.943272   49120 pod_ready.go:102] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:08:58.432531   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:08:58.432613   49120 pod_ready.go:81] duration metric: took 5.008354336s waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:08:58.432631   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:00.441099   49120 pod_ready.go:102] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:01.440207   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.440235   49120 pod_ready.go:81] duration metric: took 3.0075951s waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.440249   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446456   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.446483   49120 pod_ready.go:81] duration metric: took 6.224957ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.446495   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452476   49120 pod_ready.go:92] pod "kube-proxy-msxf7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.452509   49120 pod_ready.go:81] duration metric: took 6.006176ms waiting for pod "kube-proxy-msxf7" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.452520   49120 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457619   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:01.457640   49120 pod_ready.go:81] duration metric: took 5.112826ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.457648   49120 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:01.113738   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:03.114003   49443 pod_ready.go:102] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.262520   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:04.262989   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:04.263018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:04.262939   50565 retry.go:31] will retry after 3.540479459s: waiting for machine to come up
	I0213 23:09:02.847964   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:02.848073   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:02.863100   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.347510   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.347608   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.362561   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:03.847536   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:03.847635   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:03.863357   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.347939   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.348026   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.363027   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:04.847491   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:04.847576   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:04.858924   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.347449   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.347527   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.359307   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:05.847845   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:05.847934   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:05.859530   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.348136   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.348231   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.360149   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:06.847699   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:06.847786   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:06.859859   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.347717   49715 api_server.go:166] Checking apiserver status ...
	I0213 23:09:07.347806   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:07.360175   49715 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:07.360211   49715 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:07.360223   49715 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:07.360234   49715 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:07.360304   49715 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:07.400269   49715 cri.go:89] found id: ""
	I0213 23:09:07.400360   49715 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:07.416990   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:07.426513   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:07.426588   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436165   49715 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:07.436197   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:07.602305   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:03.467176   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:05.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:04.614199   49443 pod_ready.go:92] pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:04.614230   49443 pod_ready.go:81] duration metric: took 5.508903545s waiting for pod "coredns-5dd5756b68-h86p6" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:04.614244   49443 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:06.621198   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:08.622226   49443 pod_ready.go:102] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:07.807018   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:07.807577   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | unable to find current IP address of domain old-k8s-version-245122 in network mk-old-k8s-version-245122
	I0213 23:09:07.807609   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | I0213 23:09:07.807519   50565 retry.go:31] will retry after 4.623412618s: waiting for machine to come up
	I0213 23:09:08.566096   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.757816   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.894570   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:08.984493   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:08.984609   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.485363   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:09.984792   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.485221   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:10.985649   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.485311   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:11.516028   49715 api_server.go:72] duration metric: took 2.531534981s to wait for apiserver process to appear ...
	I0213 23:09:11.516054   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:11.516076   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:08.466006   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.965586   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:10.623965   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.623991   49443 pod_ready.go:81] duration metric: took 6.009738992s waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.624002   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631790   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.631813   49443 pod_ready.go:81] duration metric: took 7.802592ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.631830   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638042   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.638065   49443 pod_ready.go:81] duration metric: took 6.226067ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.638077   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645111   49443 pod_ready.go:92] pod "kube-proxy-swxwt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.645135   49443 pod_ready.go:81] duration metric: took 7.051124ms waiting for pod "kube-proxy-swxwt" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.645146   49443 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651681   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:10.651703   49443 pod_ready.go:81] duration metric: took 6.550486ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:10.651712   49443 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:12.659172   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:12.435133   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435720   49036 main.go:141] libmachine: (old-k8s-version-245122) Found IP for machine: 192.168.50.36
	I0213 23:09:12.435751   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has current primary IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.435762   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserving static IP address...
	I0213 23:09:12.436196   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.436241   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | skip adding static IP to network mk-old-k8s-version-245122 - found existing host DHCP lease matching {name: "old-k8s-version-245122", mac: "52:54:00:13:86:ab", ip: "192.168.50.36"}
	I0213 23:09:12.436262   49036 main.go:141] libmachine: (old-k8s-version-245122) Reserved static IP address: 192.168.50.36
	I0213 23:09:12.436280   49036 main.go:141] libmachine: (old-k8s-version-245122) Waiting for SSH to be available...
	I0213 23:09:12.436296   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Getting to WaitForSSH function...
	I0213 23:09:12.438534   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.438892   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.438925   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.439062   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH client type: external
	I0213 23:09:12.439099   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa (-rw-------)
	I0213 23:09:12.439149   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:09:12.439183   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | About to run SSH command:
	I0213 23:09:12.439202   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | exit 0
	I0213 23:09:12.541930   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | SSH cmd err, output: <nil>: 
	I0213 23:09:12.542357   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetConfigRaw
	I0213 23:09:12.543071   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.546226   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546714   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.546747   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.546955   49036 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/config.json ...
	I0213 23:09:12.547163   49036 machine.go:88] provisioning docker machine ...
	I0213 23:09:12.547200   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:12.547445   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547594   49036 buildroot.go:166] provisioning hostname "old-k8s-version-245122"
	I0213 23:09:12.547615   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.547770   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.550250   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550697   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.550734   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.550939   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.551160   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551322   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.551471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.551648   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.551974   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.552000   49036 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-245122 && echo "old-k8s-version-245122" | sudo tee /etc/hostname
	I0213 23:09:12.705495   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-245122
	
	I0213 23:09:12.705528   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.708503   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.708860   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.708893   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.709092   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.709277   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.709657   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.709831   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:12.710263   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:12.710285   49036 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-245122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-245122/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-245122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:09:12.858225   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:09:12.858266   49036 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:09:12.858287   49036 buildroot.go:174] setting up certificates
	I0213 23:09:12.858300   49036 provision.go:83] configureAuth start
	I0213 23:09:12.858313   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetMachineName
	I0213 23:09:12.858624   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:12.861374   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861727   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.861759   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.861862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.864007   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864334   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.864370   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.864549   49036 provision.go:138] copyHostCerts
	I0213 23:09:12.864627   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:09:12.864643   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:09:12.864728   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:09:12.864853   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:09:12.864868   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:09:12.864904   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:09:12.865008   49036 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:09:12.865018   49036 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:09:12.865049   49036 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:09:12.865130   49036 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-245122 san=[192.168.50.36 192.168.50.36 localhost 127.0.0.1 minikube old-k8s-version-245122]
	I0213 23:09:12.938444   49036 provision.go:172] copyRemoteCerts
	I0213 23:09:12.938508   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:09:12.938530   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:12.941384   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941719   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:12.941758   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:12.941989   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:12.942202   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:12.942394   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:12.942545   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.041212   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:09:13.069849   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0213 23:09:13.092979   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:09:13.115949   49036 provision.go:86] duration metric: configureAuth took 257.625697ms
	I0213 23:09:13.115983   49036 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:09:13.116196   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:13.116279   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.119207   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119644   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.119684   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.119901   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.120096   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120288   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.120443   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.120599   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.121149   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.121179   49036 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:09:13.453399   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:09:13.453431   49036 machine.go:91] provisioned docker machine in 906.25243ms
	I0213 23:09:13.453444   49036 start.go:300] post-start starting for "old-k8s-version-245122" (driver="kvm2")
	I0213 23:09:13.453459   49036 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:09:13.453479   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.453816   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:09:13.453849   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.457033   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457355   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.457388   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.457560   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.457778   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.457991   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.458207   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.559903   49036 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:09:13.566012   49036 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:09:13.566046   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:09:13.566119   49036 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:09:13.566215   49036 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:09:13.566336   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:09:13.578878   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:13.610396   49036 start.go:303] post-start completed in 156.935564ms
	I0213 23:09:13.610434   49036 fix.go:56] fixHost completed within 25.25543712s
	I0213 23:09:13.610459   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.613960   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614271   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.614330   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.614575   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.614828   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615081   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.615275   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.615494   49036 main.go:141] libmachine: Using SSH client type: native
	I0213 23:09:13.615954   49036 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0213 23:09:13.615977   49036 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:09:13.759068   49036 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707865753.693690059
	
	I0213 23:09:13.759095   49036 fix.go:206] guest clock: 1707865753.693690059
	I0213 23:09:13.759106   49036 fix.go:219] Guest: 2024-02-13 23:09:13.693690059 +0000 UTC Remote: 2024-02-13 23:09:13.610438113 +0000 UTC m=+362.380845041 (delta=83.251946ms)
	I0213 23:09:13.759130   49036 fix.go:190] guest clock delta is within tolerance: 83.251946ms
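	The guest-clock check above runs `date` on the guest, parses the seconds.nanoseconds output, and accepts the machine if the difference from the host clock is within a tolerance. A Go sketch of the parsing and comparison follows; the tolerance value is an assumption (the log only shows that an 83ms delta passes), and the sample string is taken from the output above:

    // clock_skew.go: parse a guest `date +%s.%N` reading and compare to host time.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // normalize the fractional part to exactly 9 digits (nanoseconds)
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1707865753.693690059") // sample from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold, not minikube's actual value
        fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta <= tolerance)
    }
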
	I0213 23:09:13.759136   49036 start.go:83] releasing machines lock for "old-k8s-version-245122", held for 25.404173426s
	I0213 23:09:13.759161   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.759480   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:13.762537   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.762928   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.762967   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.763172   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763718   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763907   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:13.763998   49036 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:09:13.764050   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.764122   49036 ssh_runner.go:195] Run: cat /version.json
	I0213 23:09:13.764149   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:13.767081   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767387   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767526   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767558   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.767736   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.767812   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:13.767834   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:13.768002   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:13.768190   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:13.768220   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768343   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:13.768370   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.768490   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:13.886145   49036 ssh_runner.go:195] Run: systemctl --version
	I0213 23:09:13.892222   49036 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:09:14.044107   49036 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:09:14.051031   49036 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:09:14.051134   49036 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:09:14.071908   49036 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:09:14.071942   49036 start.go:475] detecting cgroup driver to use...
	I0213 23:09:14.072026   49036 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:09:14.091007   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:09:14.105419   49036 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:09:14.105501   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:09:14.120760   49036 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:09:14.135296   49036 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:09:14.267338   49036 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:09:14.403936   49036 docker.go:233] disabling docker service ...
	I0213 23:09:14.404023   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:09:14.419791   49036 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:09:14.434449   49036 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:09:14.569365   49036 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:09:14.700619   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:09:14.718646   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:09:14.738870   49036 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0213 23:09:14.738944   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.750436   49036 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:09:14.750529   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.762397   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.773950   49036 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:09:14.786798   49036 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:09:14.801457   49036 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:09:14.813254   49036 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:09:14.813331   49036 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:09:14.830374   49036 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:09:14.840984   49036 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:09:14.994777   49036 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:09:15.193564   49036 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:09:15.193657   49036 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:09:15.200616   49036 start.go:543] Will wait 60s for crictl version
	I0213 23:09:15.200749   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:15.205888   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:09:15.249751   49036 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
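	After `sudo systemctl restart crio`, the log waits up to 60s for /var/run/crio/crio.sock to exist before querying crictl. A minimal Go sketch of that wait is below; the poll interval is a guess and the restart step itself is only referenced in a comment:

    // crio_sock_wait.go: wait for the CRI-O socket path to appear after a restart.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; crictl/crio can be queried next
            }
            time.Sleep(200 * time.Millisecond) // poll interval is a guess
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        // Prior step in the log: `sudo systemctl restart crio`.
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("crio socket is ready")
    }
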
	I0213 23:09:15.249884   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.302320   49036 ssh_runner.go:195] Run: crio --version
	I0213 23:09:15.361046   49036 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0213 23:09:15.362396   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetIP
	I0213 23:09:15.365548   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366008   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:15.366041   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:15.366287   49036 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:09:15.370727   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:15.384064   49036 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 23:09:15.384171   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:15.432027   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:15.432110   49036 ssh_runner.go:195] Run: which lz4
	I0213 23:09:15.436393   49036 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0213 23:09:15.440914   49036 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:09:15.440956   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0213 23:09:15.218410   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:15.218442   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:15.218457   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.346077   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.346112   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:15.516188   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:15.523339   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:15.523371   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.016747   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.024910   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.024944   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:16.516538   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:16.528640   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0213 23:09:16.528673   49715 api_server.go:103] status: https://192.168.39.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0213 23:09:17.016269   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:09:17.022413   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:09:17.033775   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:09:17.033807   49715 api_server.go:131] duration metric: took 5.51774459s to wait for apiserver health ...
	I0213 23:09:17.033819   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:09:17.033828   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:17.035635   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:17.037195   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:17.064472   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:17.115519   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:17.133771   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:09:17.133887   49715 system_pods.go:61] "coredns-5dd5756b68-cvtjg" [507ded52-9061-4ab7-8298-31847da5dad3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0213 23:09:17.133914   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [2ef46644-d4d0-4e8c-b2aa-4e154780be70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0213 23:09:17.133952   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [c1f51407-cfd9-4329-9153-2dacb87952c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0213 23:09:17.133975   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [1ad24825-8c75-4220-a316-2dd4826da8fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0213 23:09:17.133995   49715 system_pods.go:61] "kube-proxy-zzskr" [fb71ceb1-9f9a-4c8b-ae1e-1eeb91706110] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0213 23:09:17.134015   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [4500697c-7313-4217-9843-14edb2c7fdb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0213 23:09:17.134042   49715 system_pods.go:61] "metrics-server-57f55c9bc5-p97jh" [dc549bc9-87e4-4cb6-99b5-e937f2916d6d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:09:17.134063   49715 system_pods.go:61] "storage-provisioner" [c5ad957d-09f9-46e7-b0e7-e7c0b13f671f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0213 23:09:17.134081   49715 system_pods.go:74] duration metric: took 18.533785ms to wait for pod list to return data ...
	I0213 23:09:17.134103   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:17.145025   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:17.145131   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:17.145159   49715 node_conditions.go:105] duration metric: took 11.041762ms to run NodePressure ...
	I0213 23:09:17.145201   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:13.466367   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:15.966324   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:14.661158   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:16.663448   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:19.164418   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.224597   49036 crio.go:444] Took 1.788234 seconds to copy over tarball
	I0213 23:09:17.224685   49036 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:09:20.618866   49036 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.394137292s)
	I0213 23:09:20.618905   49036 crio.go:451] Took 3.394273 seconds to extract the tarball
	I0213 23:09:20.618918   49036 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:09:20.665417   49036 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:09:20.718004   49036 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0213 23:09:20.718036   49036 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.718175   49036 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.718201   49036 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.718126   49036 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0213 23:09:20.718131   49036 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.718148   49036 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.718154   49036 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.718181   49036 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719739   49036 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.719784   49036 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.719745   49036 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:20.719855   49036 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.719951   49036 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.720062   49036 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0213 23:09:20.720172   49036 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:20.720184   49036 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.877532   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.894803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:20.906336   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:20.909341   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:20.910608   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0213 23:09:20.933612   49036 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0213 23:09:20.933664   49036 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0213 23:09:20.933724   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:20.947803   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:20.979922   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.026909   49036 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0213 23:09:21.026953   49036 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.026986   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.034243   49036 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0213 23:09:21.034279   49036 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.034321   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.053547   49036 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:21.068143   49036 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0213 23:09:21.068194   49036 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0213 23:09:21.068228   49036 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0213 23:09:21.068195   49036 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.068276   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0213 23:09:21.068318   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.110630   49036 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0213 23:09:21.110695   49036 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.110747   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.120732   49036 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0213 23:09:21.120777   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0213 23:09:21.120781   49036 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.120851   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0213 23:09:21.120887   49036 ssh_runner.go:195] Run: which crictl
	I0213 23:09:21.272660   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0213 23:09:21.272723   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0213 23:09:21.272771   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0213 23:09:21.272813   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0213 23:09:21.272858   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0213 23:09:21.272914   49036 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0213 23:09:21.272966   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0213 23:09:17.706218   49715 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713293   49715 kubeadm.go:787] kubelet initialised
	I0213 23:09:17.713322   49715 kubeadm.go:788] duration metric: took 7.076014ms waiting for restarted kubelet to initialise ...
	I0213 23:09:17.713332   49715 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:17.724146   49715 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:19.733686   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.412892   49715 pod_ready.go:102] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:17.970757   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:20.466081   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:22.467149   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.660264   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:23.660813   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:21.375314   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0213 23:09:21.376306   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0213 23:09:21.376453   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0213 23:09:21.376491   49036 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0213 23:09:21.585135   49036 cache_images.go:92] LoadImages completed in 867.071904ms
	W0213 23:09:21.585230   49036 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18171-8990/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	I0213 23:09:21.585316   49036 ssh_runner.go:195] Run: crio config
	I0213 23:09:21.650741   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:21.650767   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:21.650789   49036 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:09:21.650812   49036 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-245122 NodeName:old-k8s-version-245122 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0213 23:09:21.650991   49036 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-245122"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-245122
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.50.36:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:09:21.651106   49036 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-245122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:09:21.651173   49036 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0213 23:09:21.662478   49036 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:09:21.662558   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:09:21.672654   49036 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0213 23:09:21.690609   49036 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:09:21.708199   49036 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0213 23:09:21.728361   49036 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0213 23:09:21.732450   49036 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:09:21.747349   49036 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122 for IP: 192.168.50.36
	I0213 23:09:21.747391   49036 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:21.747532   49036 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:09:21.747582   49036 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:09:21.747644   49036 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.key
	I0213 23:09:21.958574   49036 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key.e3c4a843
	I0213 23:09:21.958790   49036 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key
	I0213 23:09:21.958978   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:09:21.959024   49036 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:09:21.959040   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:09:21.959090   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:09:21.959135   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:09:21.959168   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:09:21.959234   49036 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:09:21.960121   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:09:21.986921   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:09:22.011993   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:09:22.038194   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:09:22.064839   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:09:22.089629   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:09:22.116404   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:09:22.141615   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:09:22.167298   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:09:22.194577   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:09:22.220140   49036 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:09:22.245124   49036 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:09:22.265798   49036 ssh_runner.go:195] Run: openssl version
	I0213 23:09:22.273510   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:09:22.287657   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294180   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.294261   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:09:22.300826   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:09:22.313535   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:09:22.324047   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329069   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.329171   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:09:22.335862   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:09:22.347417   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:09:22.358082   49036 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363477   49036 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.363536   49036 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:09:22.369915   49036 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:09:22.380910   49036 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:09:22.385812   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:09:22.392981   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:09:22.400722   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:09:22.409089   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:09:22.417036   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:09:22.423381   49036 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0213 23:09:22.430098   49036 kubeadm.go:404] StartCluster: {Name:old-k8s-version-245122 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-245122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:09:22.430177   49036 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:09:22.430246   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:22.490283   49036 cri.go:89] found id: ""
	I0213 23:09:22.490371   49036 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:09:22.500902   49036 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:09:22.500931   49036 kubeadm.go:636] restartCluster start
	I0213 23:09:22.501004   49036 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:09:22.511985   49036 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:22.513298   49036 kubeconfig.go:92] found "old-k8s-version-245122" server: "https://192.168.50.36:8443"
	I0213 23:09:22.516673   49036 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:09:22.526466   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:22.526561   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:22.539541   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.027052   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.027161   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.039390   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.527142   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:23.527234   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:23.539846   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.027048   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.027144   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.038367   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:24.526911   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:24.527012   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:24.538906   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.027095   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.027195   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.038232   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:25.526805   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:25.526911   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:25.540281   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:26.026811   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.026908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.039699   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:23.238007   49715 pod_ready.go:92] pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:23.238035   49715 pod_ready.go:81] duration metric: took 5.513854942s waiting for pod "coredns-5dd5756b68-cvtjg" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:23.238051   49715 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.744985   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:24.745007   49715 pod_ready.go:81] duration metric: took 1.506948533s waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:24.745015   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:26.751610   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:24.965048   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:27.465069   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.159564   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:28.660224   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:26.527051   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:26.527135   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:26.539382   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.026915   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.026990   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.038660   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:27.527300   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:27.527391   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:27.539714   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.027042   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.027124   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.039419   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.527549   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:28.527649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:28.540659   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.027032   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.027134   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.038415   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:29.526595   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:29.526690   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:29.538928   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.027041   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.027119   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.040125   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:30.526693   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:30.526765   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:30.540060   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:31.026988   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.027096   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.039327   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:28.755419   49715 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.254128   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.254154   49715 pod_ready.go:81] duration metric: took 6.509132102s waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.254164   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262007   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.262032   49715 pod_ready.go:81] duration metric: took 7.859557ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.262042   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267937   49715 pod_ready.go:92] pod "kube-proxy-zzskr" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.267959   49715 pod_ready.go:81] duration metric: took 5.911683ms waiting for pod "kube-proxy-zzskr" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.267967   49715 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273442   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:09:31.273462   49715 pod_ready.go:81] duration metric: took 5.488135ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:31.273471   49715 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:29.466908   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.965093   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.159176   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.159463   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:31.526738   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:31.526879   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:31.539174   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.026678   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.026780   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.039078   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.527030   49036 api_server.go:166] Checking apiserver status ...
	I0213 23:09:32.527120   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:09:32.539058   49036 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:09:32.539094   49036 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0213 23:09:32.539105   49036 kubeadm.go:1135] stopping kube-system containers ...
	I0213 23:09:32.539116   49036 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0213 23:09:32.539188   49036 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:09:32.583832   49036 cri.go:89] found id: ""
	I0213 23:09:32.583931   49036 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0213 23:09:32.600343   49036 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:09:32.609666   49036 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:09:32.609744   49036 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619068   49036 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0213 23:09:32.619093   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:32.751642   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:33.784796   49036 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.03311496s)
	I0213 23:09:33.784825   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.013311   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.172539   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:34.290655   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:09:34.290759   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:34.791649   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.290908   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:35.791035   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:33.283651   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.798120   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:33.966930   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.465311   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:35.160502   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:37.163077   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:36.291009   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.791117   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:09:36.809796   49036 api_server.go:72] duration metric: took 2.519141205s to wait for apiserver process to appear ...
	I0213 23:09:36.809851   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:09:36.809880   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:38.282180   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.282368   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:38.466126   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:40.967293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.811101   49036 api_server.go:269] stopped: https://192.168.50.36:8443/healthz: Get "https://192.168.50.36:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0213 23:09:41.811184   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.485465   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.485495   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.485516   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.539632   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0213 23:09:42.539667   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0213 23:09:42.809967   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:42.823007   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:42.823043   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.310359   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.318326   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0213 23:09:43.318384   49036 api_server.go:103] status: https://192.168.50.36:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0213 23:09:43.809942   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:09:43.816666   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:09:43.824593   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:09:43.824622   49036 api_server.go:131] duration metric: took 7.014763564s to wait for apiserver health ...
	I0213 23:09:43.824639   49036 cni.go:84] Creating CNI manager for ""
	I0213 23:09:43.824647   49036 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:09:43.826660   49036 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:09:39.659667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:41.660321   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.664984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.827993   49036 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:09:43.837268   49036 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:09:43.855659   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:09:43.864719   49036 system_pods.go:59] 7 kube-system pods found
	I0213 23:09:43.864756   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:09:43.864764   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:09:43.864770   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:09:43.864778   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Pending
	I0213 23:09:43.864783   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:09:43.864789   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:09:43.864795   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:09:43.864803   49036 system_pods.go:74] duration metric: took 9.113954ms to wait for pod list to return data ...
	I0213 23:09:43.864812   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:09:43.872183   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:09:43.872222   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:09:43.872237   49036 node_conditions.go:105] duration metric: took 7.415138ms to run NodePressure ...
	I0213 23:09:43.872269   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0213 23:09:44.129786   49036 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134864   49036 kubeadm.go:787] kubelet initialised
	I0213 23:09:44.134891   49036 kubeadm.go:788] duration metric: took 5.071047ms waiting for restarted kubelet to initialise ...
	I0213 23:09:44.134901   49036 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:44.139027   49036 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.143942   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143967   49036 pod_ready.go:81] duration metric: took 4.910454ms waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.143978   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.143986   49036 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.147838   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147923   49036 pod_ready.go:81] duration metric: took 3.927311ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.147935   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "etcd-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.147944   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.152465   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152490   49036 pod_ready.go:81] duration metric: took 4.536109ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.152500   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.152508   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.259273   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259309   49036 pod_ready.go:81] duration metric: took 106.789068ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.259325   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.259334   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:44.659385   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659423   49036 pod_ready.go:81] duration metric: took 400.079528ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:44.659436   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-proxy-nj7qx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:44.659443   49036 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:45.065474   49036 pod_ready.go:97] node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065510   49036 pod_ready.go:81] duration metric: took 406.055078ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	E0213 23:09:45.065524   49036 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-245122" hosting pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:45.065533   49036 pod_ready.go:38] duration metric: took 930.621868ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:45.065555   49036 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:09:45.100009   49036 ops.go:34] apiserver oom_adj: -16
	I0213 23:09:45.100037   49036 kubeadm.go:640] restartCluster took 22.599099367s
	I0213 23:09:45.100049   49036 kubeadm.go:406] StartCluster complete in 22.6699561s
	I0213 23:09:45.100070   49036 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.100156   49036 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:09:45.103031   49036 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:09:45.103315   49036 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:09:45.103447   49036 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:09:45.103540   49036 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103562   49036 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-245122"
	I0213 23:09:45.103571   49036 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103593   49036 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:45.103603   49036 config.go:182] Loaded profile config "old-k8s-version-245122": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0213 23:09:45.103638   49036 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-245122"
	I0213 23:09:45.103693   49036 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-245122"
	W0213 23:09:45.103608   49036 addons.go:243] addon metrics-server should already be in state true
	W0213 23:09:45.103577   49036 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:09:45.103879   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104144   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104215   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104227   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.104318   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.103829   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.104877   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.104904   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.123332   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43213
	I0213 23:09:45.123486   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0213 23:09:45.123555   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0213 23:09:45.123964   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124143   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124148   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.124449   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124469   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124650   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124674   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124654   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.124743   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.124965   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125030   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125083   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.125471   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.125564   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125567   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.125598   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.125612   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.129046   49036 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-245122"
	W0213 23:09:45.129065   49036 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:09:45.129085   49036 host.go:66] Checking if "old-k8s-version-245122" exists ...
	I0213 23:09:45.129385   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.129415   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.145900   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0213 23:09:45.146570   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.147144   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.147164   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.147448   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.147635   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.156023   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.158533   49036 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:09:45.159815   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:09:45.159837   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:09:45.159862   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.163799   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164445   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.164472   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.164859   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.165112   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.165340   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.165523   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.166097   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I0213 23:09:45.166513   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.167086   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.167111   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.167442   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.167623   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.168284   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40593
	I0213 23:09:45.168855   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.169453   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.169471   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.169702   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.169992   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.171532   49036 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:09:45.170687   49036 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:09:45.172965   49036 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.172979   49036 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:09:45.172983   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:09:45.173009   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.176733   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177198   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.177232   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.177269   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.177506   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.177675   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.177885   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.190339   49036 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0213 23:09:45.190750   49036 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:09:45.191239   49036 main.go:141] libmachine: Using API Version  1
	I0213 23:09:45.191267   49036 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:09:45.191609   49036 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:09:45.191803   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetState
	I0213 23:09:45.193470   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .DriverName
	I0213 23:09:45.193730   49036 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.193748   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:09:45.193769   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHHostname
	I0213 23:09:45.196896   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197422   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:86:ab", ip: ""} in network mk-old-k8s-version-245122: {Iface:virbr4 ExpiryTime:2024-02-14 00:09:03 +0000 UTC Type:0 Mac:52:54:00:13:86:ab Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:old-k8s-version-245122 Clientid:01:52:54:00:13:86:ab}
	I0213 23:09:45.197459   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | domain old-k8s-version-245122 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:86:ab in network mk-old-k8s-version-245122
	I0213 23:09:45.197745   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHPort
	I0213 23:09:45.197935   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHKeyPath
	I0213 23:09:45.198191   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .GetSSHUsername
	I0213 23:09:45.198301   49036 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/old-k8s-version-245122/id_rsa Username:docker}
	I0213 23:09:45.392787   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:09:45.392808   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:09:45.426298   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:09:45.440984   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:09:45.452209   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:09:45.452239   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:09:45.531203   49036 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:45.531226   49036 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:09:45.593779   49036 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0213 23:09:45.621016   49036 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-245122" context rescaled to 1 replicas
	I0213 23:09:45.621056   49036 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:09:45.623081   49036 out.go:177] * Verifying Kubernetes components...
	I0213 23:09:45.624623   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:09:45.631546   49036 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:09:46.116692   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116732   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.116735   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.116736   49036 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:46.116754   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117125   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117172   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117183   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117192   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117201   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117203   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117218   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117228   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.117247   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.117667   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117671   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.117708   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117728   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.117962   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.117980   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140111   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.140133   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.140411   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.140441   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.140431   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.228877   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.228908   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229250   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229273   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229273   49036 main.go:141] libmachine: (old-k8s-version-245122) DBG | Closing plugin on server side
	I0213 23:09:46.229283   49036 main.go:141] libmachine: Making call to close driver server
	I0213 23:09:46.229293   49036 main.go:141] libmachine: (old-k8s-version-245122) Calling .Close
	I0213 23:09:46.229523   49036 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:09:46.229538   49036 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:09:46.229558   49036 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-245122"
	I0213 23:09:46.231176   49036 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:09:46.232329   49036 addons.go:505] enable addons completed in 1.128872958s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:09:42.783163   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:44.783634   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.281934   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:43.465665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:45.964909   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:46.160084   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.664267   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:48.120153   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:50.120636   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:49.781808   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.281392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:47.968701   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:50.465488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:51.161059   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:53.662099   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.121578   49036 node_ready.go:58] node "old-k8s-version-245122" has status "Ready":"False"
	I0213 23:09:53.120859   49036 node_ready.go:49] node "old-k8s-version-245122" has status "Ready":"True"
	I0213 23:09:53.120885   49036 node_ready.go:38] duration metric: took 7.004121529s waiting for node "old-k8s-version-245122" to be "Ready" ...
	I0213 23:09:53.120896   49036 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:09:53.129174   49036 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:09:55.136200   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.283011   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.286197   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:52.964530   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:54.964679   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.966183   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:56.159475   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.160233   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:57.636373   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.137616   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:58.782611   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:09:59.465313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:01.465877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:00.660202   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.159244   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:02.635052   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:04.636231   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.284083   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.781701   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:03.966234   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.465225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:05.160136   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.160817   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.161703   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:06.636789   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.135398   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.135441   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:07.782000   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:09.782948   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.785161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:08.465688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:10.967225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:11.658937   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.661460   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.138346   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.636437   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:14.282538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.781339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:13.465521   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:15.965224   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:16.162065   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:18.658525   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.648838   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.137226   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:19.282514   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:21.781917   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:17.966716   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.464644   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.465071   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:20.659514   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.662481   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:22.636371   49036 pod_ready.go:102] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.136197   49036 pod_ready.go:92] pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.136234   49036 pod_ready.go:81] duration metric: took 31.007029263s waiting for pod "coredns-5644d7b6d9-kr6t9" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.136249   49036 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142089   49036 pod_ready.go:92] pod "etcd-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.142114   49036 pod_ready.go:81] duration metric: took 5.854061ms waiting for pod "etcd-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.142127   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149372   49036 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.149396   49036 pod_ready.go:81] duration metric: took 7.261015ms waiting for pod "kube-apiserver-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.149409   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158342   49036 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.158371   49036 pod_ready.go:81] duration metric: took 8.953577ms waiting for pod "kube-controller-manager-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.158384   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165154   49036 pod_ready.go:92] pod "kube-proxy-nj7qx" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.165177   49036 pod_ready.go:81] duration metric: took 6.785683ms waiting for pod "kube-proxy-nj7qx" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.165186   49036 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533838   49036 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace has status "Ready":"True"
	I0213 23:10:24.533863   49036 pod_ready.go:81] duration metric: took 368.670292ms waiting for pod "kube-scheduler-old-k8s-version-245122" in "kube-system" namespace to be "Ready" ...
	I0213 23:10:24.533896   49036 pod_ready.go:38] duration metric: took 31.412988042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:10:24.533912   49036 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:10:24.534007   49036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:10:24.549186   49036 api_server.go:72] duration metric: took 38.928101792s to wait for apiserver process to appear ...
	I0213 23:10:24.549217   49036 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:10:24.549238   49036 api_server.go:253] Checking apiserver healthz at https://192.168.50.36:8443/healthz ...
	I0213 23:10:24.557366   49036 api_server.go:279] https://192.168.50.36:8443/healthz returned 200:
	ok
	I0213 23:10:24.558364   49036 api_server.go:141] control plane version: v1.16.0
	I0213 23:10:24.558387   49036 api_server.go:131] duration metric: took 9.165129ms to wait for apiserver health ...
	I0213 23:10:24.558396   49036 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:10:24.736365   49036 system_pods.go:59] 8 kube-system pods found
	I0213 23:10:24.736396   49036 system_pods.go:61] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:24.736401   49036 system_pods.go:61] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:24.736405   49036 system_pods.go:61] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:24.736409   49036 system_pods.go:61] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:24.736413   49036 system_pods.go:61] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:24.736417   49036 system_pods.go:61] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:24.736423   49036 system_pods.go:61] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:24.736429   49036 system_pods.go:61] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:24.736437   49036 system_pods.go:74] duration metric: took 178.035411ms to wait for pod list to return data ...
	I0213 23:10:24.736444   49036 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:10:24.934360   49036 default_sa.go:45] found service account: "default"
	I0213 23:10:24.934390   49036 default_sa.go:55] duration metric: took 197.940334ms for default service account to be created ...
	I0213 23:10:24.934400   49036 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:10:25.135904   49036 system_pods.go:86] 8 kube-system pods found
	I0213 23:10:25.135933   49036 system_pods.go:89] "coredns-5644d7b6d9-kr6t9" [0c060820-1e79-4e3e-92d8-ec77f75741c4] Running
	I0213 23:10:25.135940   49036 system_pods.go:89] "etcd-old-k8s-version-245122" [9e738f3c-e5a3-4cf1-b0a4-9b264c10498f] Running
	I0213 23:10:25.135944   49036 system_pods.go:89] "kube-apiserver-old-k8s-version-245122" [a20e5e9b-ae3e-4a66-874d-725c92bafc8d] Running
	I0213 23:10:25.135949   49036 system_pods.go:89] "kube-controller-manager-old-k8s-version-245122" [25f2a999-b978-4105-98d0-84aa2b0866a1] Running
	I0213 23:10:25.135954   49036 system_pods.go:89] "kube-proxy-nj7qx" [4efb1b13-7f14-49bd-aacf-600b7733cbe0] Running
	I0213 23:10:25.135959   49036 system_pods.go:89] "kube-scheduler-old-k8s-version-245122" [edcc4d60-ee35-43d6-8656-e42b356d4898] Running
	I0213 23:10:25.135967   49036 system_pods.go:89] "metrics-server-74d5856cc6-c6rp6" [cfb3f364-5eee-45a0-bd22-88d1efaefee3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:10:25.135973   49036 system_pods.go:89] "storage-provisioner" [e3977149-1877-4180-b568-72c5ae81788f] Running
	I0213 23:10:25.135982   49036 system_pods.go:126] duration metric: took 201.576732ms to wait for k8s-apps to be running ...
	I0213 23:10:25.135992   49036 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:10:25.136035   49036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:10:25.151540   49036 system_svc.go:56] duration metric: took 15.53628ms WaitForService to wait for kubelet.
	I0213 23:10:25.151582   49036 kubeadm.go:581] duration metric: took 39.530502672s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:10:25.151608   49036 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:10:25.333026   49036 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:10:25.333067   49036 node_conditions.go:123] node cpu capacity is 2
	I0213 23:10:25.333083   49036 node_conditions.go:105] duration metric: took 181.468311ms to run NodePressure ...
	I0213 23:10:25.333171   49036 start.go:228] waiting for startup goroutines ...
	I0213 23:10:25.333186   49036 start.go:233] waiting for cluster config update ...
	I0213 23:10:25.333200   49036 start.go:242] writing updated cluster config ...
	I0213 23:10:25.333540   49036 ssh_runner.go:195] Run: rm -f paused
	I0213 23:10:25.385974   49036 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0213 23:10:25.388225   49036 out.go:177] 
	W0213 23:10:25.389965   49036 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0213 23:10:25.391288   49036 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0213 23:10:25.392550   49036 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-245122" cluster and "default" namespace by default
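The waiter above polls the kube-system pods until each reports Ready, then hands the cluster over to kubectl. A hedged sketch of how the same state could be inspected by hand, assuming the kubeconfig context name matches the profile name old-k8s-version-245122 seen in the log:

    # List the kube-system pods the readiness waiter was polling
    # (context name assumed to equal the minikube profile name).
    kubectl --context old-k8s-version-245122 -n kube-system get pods

    # The host kubectl (1.29.1) is 13 minor versions ahead of the 1.16.0 cluster,
    # so a version-matched client can be used instead, as the log output suggests:
    minikube -p old-k8s-version-245122 kubectl -- get pods -A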
	I0213 23:10:24.281840   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.782341   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:24.467427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:26.965363   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:25.158811   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:27.158903   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.162245   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.283592   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.781156   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:29.465534   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.965570   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:31.163299   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.664184   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:34.281475   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.282050   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:33.966548   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.465588   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:36.159425   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.161056   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.781806   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.782565   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:38.465618   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.966613   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:40.659031   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.660105   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:43.282453   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.782436   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:42.967065   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.465277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:45.161783   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.659092   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:48.281903   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:50.782326   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:47.965978   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.972688   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:52.464489   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:49.661150   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:51.661183   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.159746   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:53.280877   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:55.281432   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:54.465386   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.966020   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:56.659863   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.161127   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:57.781250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:00.283244   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:10:59.464959   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.466871   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:01.660636   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:04.162081   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:02.782971   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.282593   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:03.964986   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:05.967545   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:06.660761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.663916   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:07.783437   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.280975   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.281595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:08.466954   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:10.965354   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:11.159761   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:13.160656   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:14.281819   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:16.781331   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:12.965830   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.464980   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:15.659894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.659996   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:18.782849   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.281343   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:17.965490   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.965841   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:22.465427   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:19.660194   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:21.660348   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.158929   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:23.281731   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:25.282299   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:24.966008   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.463392   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:26.160687   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:28.160792   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:27.783770   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.282652   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:29.464941   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:31.965436   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:30.160850   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.661971   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:32.781595   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.282110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:33.966260   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:36.465148   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:35.160093   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.160571   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:37.782870   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.281536   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:38.466898   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:40.965121   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:39.659930   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.160848   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.782134   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.287871   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:42.966494   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:45.465485   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:47.477988   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:44.659259   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:46.660566   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.165414   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.781501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.282150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:49.965827   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:52.465337   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:51.658915   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.160444   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.286142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.783072   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:54.465900   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.466029   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:56.659103   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.660419   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.784481   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.282749   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:11:58.965179   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:01.465662   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:00.661165   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.161035   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.787946   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:06.281932   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:03.964460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.966240   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:05.660384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.159544   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.781709   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.782556   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:08.465300   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.472665   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:10.660651   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.159097   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:13.281500   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.781953   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:12.965510   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:14.966435   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.465559   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:15.160583   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.659605   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:17.784167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:20.280384   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:22.282494   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.468825   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.965088   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:19.659644   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:21.662561   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.160923   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:24.781351   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:27.281938   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:23.966646   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.465094   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:26.160986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.161300   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:29.780690   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.282298   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:28.965450   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:31.467937   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:30.659169   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:32.659681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.782495   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.782679   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:33.965594   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.465409   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:34.660174   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:36.660802   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.160838   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:39.281205   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.281734   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:38.465702   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:40.965477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:41.659732   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:44.159873   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:43.780979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.781438   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:42.966342   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:45.464993   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.465742   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:46.162330   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:48.659964   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:47.782513   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:50.281255   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:52.281345   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:49.967402   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.968499   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:51.161451   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:53.659594   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.782653   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.782779   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:54.465429   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:56.466199   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:55.659986   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:57.661028   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:59.280842   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.281110   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:12:58.965410   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:00.966316   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:01.458755   49120 pod_ready.go:81] duration metric: took 4m0.00109163s waiting for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:01.458812   49120 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-r44rm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:01.458839   49120 pod_ready.go:38] duration metric: took 4m13.051566827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:01.458873   49120 kubeadm.go:640] restartCluster took 4m33.496925279s
	W0213 23:13:01.458967   49120 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:01.459008   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
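At this point the only pod that never reported Ready for this profile was metrics-server, and the cluster is reset rather than restarted. A hedged sketch of commands that could be used to inspect such a stuck pod before the reset (the k8s-app=metrics-server selector is an assumption based on the pod names in the log, not something the log confirms):

    # Describe the metrics-server pod to see why its container stayed unready
    # (label selector is an assumption, not taken from the log).
    kubectl -n kube-system describe pod -l k8s-app=metrics-server

    # Check container logs for image-pull or startup errors.
    kubectl -n kube-system logs -l k8s-app=metrics-server --all-containers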
	I0213 23:13:00.160188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:02.663549   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:03.285939   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.782469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:05.165196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:07.661417   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:08.283394   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.286257   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.161461   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:10.652828   49443 pod_ready.go:81] duration metric: took 4m0.001101625s waiting for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:10.652857   49443 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-lmcwv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:10.652877   49443 pod_ready.go:38] duration metric: took 4m11.564476633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:10.652905   49443 kubeadm.go:640] restartCluster took 4m34.344806193s
	W0213 23:13:10.652970   49443 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:10.652997   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:12.782042   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:15.282782   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:16.418651   49120 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.959611919s)
	I0213 23:13:16.418750   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:16.435137   49120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:16.448436   49120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:16.459777   49120 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:16.459826   49120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:16.708111   49120 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
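The preflight warning above is kubeadm asking for the kubelet unit to be enabled; the corresponding command, quoted from the warning itself, would be:

    # Enable the kubelet systemd unit so it starts on boot,
    # as suggested by the kubeadm preflight warning.
    sudo systemctl enable kubelet.service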
	I0213 23:13:17.782474   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:20.283238   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:22.782418   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:24.782894   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:26.784203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:28.667785   49120 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:13:28.667865   49120 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:28.668000   49120 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:28.668151   49120 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:28.668282   49120 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:28.668372   49120 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:28.670147   49120 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:28.670266   49120 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:28.670367   49120 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:28.670480   49120 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:28.670559   49120 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:28.670674   49120 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:28.670763   49120 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:28.670864   49120 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:28.670964   49120 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:28.671068   49120 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:28.671163   49120 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:28.671221   49120 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:28.671296   49120 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:28.671368   49120 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:28.671440   49120 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0213 23:13:28.671506   49120 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:28.671580   49120 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:28.671658   49120 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:28.671734   49120 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:28.671791   49120 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:28.673351   49120 out.go:204]   - Booting up control plane ...
	I0213 23:13:28.673448   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:28.673535   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:28.673627   49120 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:28.673744   49120 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:28.673846   49120 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:28.673903   49120 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:28.674084   49120 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:28.674176   49120 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.010705 seconds
	I0213 23:13:28.674315   49120 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:28.674470   49120 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:28.674543   49120 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:28.674766   49120 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-778731 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:28.674832   49120 kubeadm.go:322] [bootstrap-token] Using token: dwjaqi.e4fr4bxqfdq63m9e
	I0213 23:13:28.676266   49120 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:28.676392   49120 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:28.676495   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:28.676671   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:28.676871   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:28.677028   49120 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:28.677142   49120 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:28.677283   49120 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:28.677337   49120 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:28.677392   49120 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:28.677405   49120 kubeadm.go:322] 
	I0213 23:13:28.677476   49120 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:28.677488   49120 kubeadm.go:322] 
	I0213 23:13:28.677586   49120 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:28.677599   49120 kubeadm.go:322] 
	I0213 23:13:28.677631   49120 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:28.677712   49120 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:28.677780   49120 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:28.677793   49120 kubeadm.go:322] 
	I0213 23:13:28.677864   49120 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:28.677881   49120 kubeadm.go:322] 
	I0213 23:13:28.677941   49120 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:28.677948   49120 kubeadm.go:322] 
	I0213 23:13:28.678019   49120 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:28.678125   49120 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:28.678215   49120 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:28.678223   49120 kubeadm.go:322] 
	I0213 23:13:28.678324   49120 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:28.678426   49120 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:28.678433   49120 kubeadm.go:322] 
	I0213 23:13:28.678544   49120 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.678685   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:28.678714   49120 kubeadm.go:322] 	--control-plane 
	I0213 23:13:28.678722   49120 kubeadm.go:322] 
	I0213 23:13:28.678834   49120 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:28.678841   49120 kubeadm.go:322] 
	I0213 23:13:28.678950   49120 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token dwjaqi.e4fr4bxqfdq63m9e \
	I0213 23:13:28.679094   49120 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:28.679106   49120 cni.go:84] Creating CNI manager for ""
	I0213 23:13:28.679116   49120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:28.680826   49120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:25.241610   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.588591305s)
	I0213 23:13:25.241679   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:25.257221   49443 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:25.271651   49443 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:25.285556   49443 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:25.285615   49443 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:25.530438   49443 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:29.281713   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:31.274625   49715 pod_ready.go:81] duration metric: took 4m0.00114055s waiting for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:31.274654   49715 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-p97jh" in "kube-system" namespace to be "Ready" (will not retry!)
	I0213 23:13:31.274676   49715 pod_ready.go:38] duration metric: took 4m13.561333764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:31.274700   49715 kubeadm.go:640] restartCluster took 4m33.95094669s
	W0213 23:13:31.274766   49715 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0213 23:13:31.274807   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0213 23:13:28.682020   49120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:28.710027   49120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:28.752989   49120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:28.753118   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:28.753117   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=no-preload-778731 minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.147657   49120 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:29.147806   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:29.647920   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:30.648105   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.148819   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:31.648877   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.148622   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:32.647939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
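
The repeated `kubectl get sa default` calls above are minikube's elevateKubeSystemPrivileges step for no-preload-778731: grant cluster-admin to the kube-system default service account, then poll until the default service account exists. A rough bash equivalent, reusing the binary and kubeconfig paths from the log:

    # Bind cluster-admin to kube-system:default (the minikube-rbac binding logged above).
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # Poll (roughly every 500ms, matching the timestamps above) for the default service account.
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
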
	I0213 23:13:37.005257   49443 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:37.005340   49443 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:37.005464   49443 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:37.005611   49443 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:37.005750   49443 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:37.005836   49443 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:37.007501   49443 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:37.007606   49443 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:37.007687   49443 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:37.007782   49443 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:37.007869   49443 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:37.007960   49443 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:37.008047   49443 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:37.008139   49443 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:37.008221   49443 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:37.008324   49443 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:37.008437   49443 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:37.008488   49443 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:37.008577   49443 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:37.008657   49443 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:37.008742   49443 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:37.008837   49443 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:37.008916   49443 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:37.009044   49443 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:37.009150   49443 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:37.010808   49443 out.go:204]   - Booting up control plane ...
	I0213 23:13:37.010943   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:37.011053   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:37.011155   49443 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:37.011537   49443 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:37.011661   49443 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:37.011720   49443 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:37.011915   49443 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:37.012024   49443 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.005842 seconds
	I0213 23:13:37.012154   49443 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:37.012297   49443 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:37.012376   49443 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:37.012595   49443 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-340656 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:37.012668   49443 kubeadm.go:322] [bootstrap-token] Using token: 0y2cx5.j4vucgv3wtut6xkw
	I0213 23:13:37.014296   49443 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:37.014433   49443 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:37.014535   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:37.014697   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:37.014837   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:37.014966   49443 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:37.015073   49443 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:37.015203   49443 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:37.015256   49443 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:37.015316   49443 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:37.015326   49443 kubeadm.go:322] 
	I0213 23:13:37.015393   49443 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:37.015403   49443 kubeadm.go:322] 
	I0213 23:13:37.015500   49443 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:37.015511   49443 kubeadm.go:322] 
	I0213 23:13:37.015535   49443 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:37.015603   49443 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:37.015668   49443 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:37.015677   49443 kubeadm.go:322] 
	I0213 23:13:37.015744   49443 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:37.015754   49443 kubeadm.go:322] 
	I0213 23:13:37.015814   49443 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:37.015824   49443 kubeadm.go:322] 
	I0213 23:13:37.015889   49443 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:37.015981   49443 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:37.016075   49443 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:37.016087   49443 kubeadm.go:322] 
	I0213 23:13:37.016182   49443 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:37.016272   49443 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:37.016282   49443 kubeadm.go:322] 
	I0213 23:13:37.016371   49443 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016486   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:37.016522   49443 kubeadm.go:322] 	--control-plane 
	I0213 23:13:37.016527   49443 kubeadm.go:322] 
	I0213 23:13:37.016637   49443 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:37.016643   49443 kubeadm.go:322] 
	I0213 23:13:37.016739   49443 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0y2cx5.j4vucgv3wtut6xkw \
	I0213 23:13:37.016875   49443 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:37.016887   49443 cni.go:84] Creating CNI manager for ""
	I0213 23:13:37.016895   49443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:37.018483   49443 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
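
kubeadm reports a successful init for embed-certs-340656 at this point. A quick way to confirm the control plane is actually serving, following kubeadm's own hint about admin.conf and using the kubectl binary path from the log (a sketch, run inside the VM):

    sudo KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.28.4/kubectl get nodes -o wide
    # etcd, kube-apiserver, kube-controller-manager and kube-scheduler pods should be Running.
    sudo KUBECONFIG=/etc/kubernetes/admin.conf \
      /var/lib/minikube/binaries/v1.28.4/kubectl -n kube-system get pods
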
	I0213 23:13:33.148023   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:33.648861   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.147939   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:34.648160   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.148620   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:35.648710   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.148263   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:36.648202   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.148597   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.648067   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.019795   49443 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:37.080689   49443 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0213 23:13:37.145132   49443 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:37.145273   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.145374   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=embed-certs-340656 minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:37.195322   49443 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:37.575387   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.075523   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.575550   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.075996   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
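
The scp at 23:13:37.08 above writes the 457-byte bridge CNI config for embed-certs-340656; its contents are not recorded in this log, but the file can be inspected from the host. A sketch (assuming `minikube` stands for the binary under test):

    # View the bridge CNI config minikube just wrote.
    minikube -p embed-certs-340656 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    # A .conflist of this kind is JSON with a "plugins" array; the "bridge" entry
    # typically uses host-local IPAM, but the exact fields are not shown in the log.
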
	I0213 23:13:38.148294   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:38.648747   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.148671   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:39.648021   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.148566   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.648799   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.148354   49120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.257502   49120 kubeadm.go:1088] duration metric: took 12.504501087s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:41.257549   49120 kubeadm.go:406] StartCluster complete in 5m13.347836612s
	I0213 23:13:41.257573   49120 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.257681   49120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:41.260299   49120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:41.260647   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:41.260677   49120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:41.260755   49120 addons.go:69] Setting storage-provisioner=true in profile "no-preload-778731"
	I0213 23:13:41.260779   49120 addons.go:234] Setting addon storage-provisioner=true in "no-preload-778731"
	W0213 23:13:41.260787   49120 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:41.260777   49120 addons.go:69] Setting metrics-server=true in profile "no-preload-778731"
	I0213 23:13:41.260807   49120 addons.go:234] Setting addon metrics-server=true in "no-preload-778731"
	W0213 23:13:41.260815   49120 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:41.260840   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260858   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.260882   49120 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:13:41.261207   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261227   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261267   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261291   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.261426   49120 addons.go:69] Setting default-storageclass=true in profile "no-preload-778731"
	I0213 23:13:41.261447   49120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-778731"
	I0213 23:13:41.261807   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.261899   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.278449   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0213 23:13:41.278646   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0213 23:13:41.278874   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.278992   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.279367   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279389   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279460   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.279485   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.279748   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.279929   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.280301   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280345   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280389   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.280403   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0213 23:13:41.280420   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.280729   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.281302   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.281324   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.281723   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.281932   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.286017   49120 addons.go:234] Setting addon default-storageclass=true in "no-preload-778731"
	W0213 23:13:41.286039   49120 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:41.286067   49120 host.go:66] Checking if "no-preload-778731" exists ...
	I0213 23:13:41.286476   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.286511   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.299018   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0213 23:13:41.299266   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0213 23:13:41.299626   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.299951   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.300111   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300127   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300624   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.300656   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.300707   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.300885   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.301280   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.301628   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.303270   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.304846   49120 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:41.303809   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.306034   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:41.306048   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:41.306068   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.307731   49120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:41.309028   49120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.309045   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:41.309065   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.309214   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0213 23:13:41.309635   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.309722   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310208   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.310227   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.310342   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.310379   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.310514   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.310731   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.310877   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.310900   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.311093   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.311466   49120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:41.311516   49120 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:41.312194   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312559   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.312580   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.312814   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.313006   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.313140   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.313283   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.327021   49120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
	I0213 23:13:41.327605   49120 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:41.328038   49120 main.go:141] libmachine: Using API Version  1
	I0213 23:13:41.328055   49120 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:41.328399   49120 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:41.328596   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetState
	I0213 23:13:41.330082   49120 main.go:141] libmachine: (no-preload-778731) Calling .DriverName
	I0213 23:13:41.330333   49120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.330344   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:41.330356   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHHostname
	I0213 23:13:41.333321   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333703   49120 main.go:141] libmachine: (no-preload-778731) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:3b:82", ip: ""} in network mk-no-preload-778731: {Iface:virbr1 ExpiryTime:2024-02-14 00:08:01 +0000 UTC Type:0 Mac:52:54:00:74:3b:82 Iaid: IPaddr:192.168.83.31 Prefix:24 Hostname:no-preload-778731 Clientid:01:52:54:00:74:3b:82}
	I0213 23:13:41.333731   49120 main.go:141] libmachine: (no-preload-778731) DBG | domain no-preload-778731 has defined IP address 192.168.83.31 and MAC address 52:54:00:74:3b:82 in network mk-no-preload-778731
	I0213 23:13:41.333899   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHPort
	I0213 23:13:41.334075   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHKeyPath
	I0213 23:13:41.334494   49120 main.go:141] libmachine: (no-preload-778731) Calling .GetSSHUsername
	I0213 23:13:41.334643   49120 sshutil.go:53] new ssh client: &{IP:192.168.83.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/no-preload-778731/id_rsa Username:docker}
	I0213 23:13:41.502879   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:41.534876   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:41.534908   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:41.587429   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:41.589619   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:41.616755   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:41.616783   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:41.688015   49120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.688039   49120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:41.777647   49120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:41.844418   49120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-778731" context rescaled to 1 replicas
	I0213 23:13:41.844460   49120 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.83.31 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:41.847252   49120 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:41.848614   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:42.311509   49120 start.go:929] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
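
The `sed ... | kubectl replace -f -` pipeline at 23:13:41.50 is what produces this "host record injected" message: it rewrites the CoreDNS Corefile in the coredns ConfigMap, adding a `log` directive and a `hosts` block mapping host.minikube.internal to the host-side gateway. To see the result, reusing the paths from the log (a sketch):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml
    # Per the sed expression above, the Corefile gains, just before the forward plugin:
    #         hosts {
    #            192.168.83.1 host.minikube.internal
    #            fallthrough
    #         }
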
	I0213 23:13:42.915046   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.327574246s)
	I0213 23:13:42.915112   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915127   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915219   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.325575731s)
	I0213 23:13:42.915241   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915250   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.915430   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.915467   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.915475   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.915485   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.915493   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917607   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917640   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917673   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917652   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:42.917719   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.917730   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.917764   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.917773   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.917996   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.918014   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.963310   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.963336   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.963632   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.963652   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999467   49120 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.150816624s)
	I0213 23:13:42.999513   49120 node_ready.go:35] waiting up to 6m0s for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:42.999542   49120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.221849263s)
	I0213 23:13:42.999604   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999620   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:42.999914   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:42.999932   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:42.999944   49120 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:42.999953   49120 main.go:141] libmachine: (no-preload-778731) Calling .Close
	I0213 23:13:43.000322   49120 main.go:141] libmachine: (no-preload-778731) DBG | Closing plugin on server side
	I0213 23:13:43.000341   49120 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:43.000355   49120 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:43.000372   49120 addons.go:470] Verifying addon metrics-server=true in "no-preload-778731"
	I0213 23:13:43.003022   49120 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
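
With storage-provisioner, default-storageclass and metrics-server reported as enabled for no-preload-778731, a cross-check from the host could look like the following (a sketch; `minikube` stands for the binary under test and the kubectl context name matches the profile):

    minikube -p no-preload-778731 addons list
    kubectl --context no-preload-778731 -n kube-system get deploy metrics-server
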
	I0213 23:13:39.575883   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.076191   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:40.575969   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.075959   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:41.576297   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.075511   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:42.575528   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.076112   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:43.575825   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:44.076340   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.156104   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.881268834s)
	I0213 23:13:46.156183   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:46.173816   49715 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:13:46.185578   49715 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:13:46.196865   49715 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:13:46.196911   49715 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:13:46.251785   49715 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:13:46.251863   49715 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:13:46.416331   49715 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:13:46.416503   49715 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:13:46.416643   49715 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:13:46.690351   49715 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:13:46.692352   49715 out.go:204]   - Generating certificates and keys ...
	I0213 23:13:46.692470   49715 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:13:46.692583   49715 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:13:46.692710   49715 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0213 23:13:46.692812   49715 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0213 23:13:46.692929   49715 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0213 23:13:46.693027   49715 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0213 23:13:46.693116   49715 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0213 23:13:46.693220   49715 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0213 23:13:46.693322   49715 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0213 23:13:46.693423   49715 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0213 23:13:46.693480   49715 kubeadm.go:322] [certs] Using the existing "sa" key
	I0213 23:13:46.693559   49715 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:13:46.919270   49715 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:13:47.096236   49715 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:13:47.207058   49715 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:13:47.262083   49715 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:13:47.262614   49715 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:13:47.265288   49715 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:13:47.267143   49715 out.go:204]   - Booting up control plane ...
	I0213 23:13:47.267277   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:13:47.267383   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:13:47.267570   49715 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:13:47.284718   49715 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:13:47.286027   49715 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:13:47.286152   49715 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:13:47.443974   49715 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:13:43.004170   49120 addons.go:505] enable addons completed in 1.743494195s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:43.030538   49120 node_ready.go:49] node "no-preload-778731" has status "Ready":"True"
	I0213 23:13:43.030566   49120 node_ready.go:38] duration metric: took 31.039482ms waiting for node "no-preload-778731" to be "Ready" ...
	I0213 23:13:43.030581   49120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:43.041854   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:43.085259   49120 pod_ready.go:97] pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085310   49120 pod_ready.go:81] duration metric: took 43.414984ms waiting for pod "coredns-76f75df574-6lfc8" in "kube-system" namespace to be "Ready" ...
	E0213 23:13:43.085328   49120 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-76f75df574-6lfc8" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-02-13 23:13:41 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.83.31 HostIPs:[{IP:192.168.83.31}] PodIP: PodIPs:[] StartTime:2024-02-13 23:13:41 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0xc00397e07a AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0213 23:13:43.085337   49120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094656   49120 pod_ready.go:92] pod "coredns-76f75df574-f4g5w" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.094686   49120 pod_ready.go:81] duration metric: took 2.009341273s waiting for pod "coredns-76f75df574-f4g5w" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.094696   49120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101331   49120 pod_ready.go:92] pod "etcd-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.101352   49120 pod_ready.go:81] duration metric: took 6.650644ms waiting for pod "etcd-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.101362   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108662   49120 pod_ready.go:92] pod "kube-apiserver-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.108686   49120 pod_ready.go:81] duration metric: took 7.317621ms waiting for pod "kube-apiserver-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.108695   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115600   49120 pod_ready.go:92] pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.115620   49120 pod_ready.go:81] duration metric: took 6.918739ms waiting for pod "kube-controller-manager-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.115629   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403942   49120 pod_ready.go:92] pod "kube-proxy-7vcqq" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.403977   49120 pod_ready.go:81] duration metric: took 288.33703ms waiting for pod "kube-proxy-7vcqq" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.403990   49120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804609   49120 pod_ready.go:92] pod "kube-scheduler-no-preload-778731" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:45.804646   49120 pod_ready.go:81] duration metric: took 400.646621ms waiting for pod "kube-scheduler-no-preload-778731" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:45.804661   49120 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
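
The pod_ready polling above walks the same system-critical labels the harness lists (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) before waiting on metrics-server. A hedged manual equivalent with `kubectl wait` (it errors out if a selector matches no pods):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-778731 -n kube-system wait \
        --for=condition=Ready pod -l "$sel" --timeout=6m
    done
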
	I0213 23:13:44.575423   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.076435   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:45.575498   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.076393   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:46.575716   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.075439   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:47.575623   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.076149   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.575619   49443 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:48.757507   49443 kubeadm.go:1088] duration metric: took 11.612278698s to wait for elevateKubeSystemPrivileges.
	I0213 23:13:48.757567   49443 kubeadm.go:406] StartCluster complete in 5m12.504615736s
	I0213 23:13:48.757592   49443 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.757689   49443 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:13:48.760402   49443 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:13:48.760794   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:13:48.761145   49443 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:13:48.761320   49443 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:13:48.761392   49443 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-340656"
	I0213 23:13:48.761411   49443 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-340656"
	W0213 23:13:48.761420   49443 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:13:48.761470   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762064   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762094   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762173   49443 addons.go:69] Setting default-storageclass=true in profile "embed-certs-340656"
	I0213 23:13:48.762208   49443 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-340656"
	I0213 23:13:48.762334   49443 addons.go:69] Setting metrics-server=true in profile "embed-certs-340656"
	I0213 23:13:48.762359   49443 addons.go:234] Setting addon metrics-server=true in "embed-certs-340656"
	W0213 23:13:48.762368   49443 addons.go:243] addon metrics-server should already be in state true
	I0213 23:13:48.762418   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.762605   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762642   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.762770   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.762812   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.782845   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0213 23:13:48.782988   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0213 23:13:48.782993   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0213 23:13:48.783453   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783578   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.783583   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.784018   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784038   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784160   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784177   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784197   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.784211   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.784431   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784636   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.784704   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.784781   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.785241   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785264   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.785910   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.785952   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.795703   49443 addons.go:234] Setting addon default-storageclass=true in "embed-certs-340656"
	W0213 23:13:48.795803   49443 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:13:48.795847   49443 host.go:66] Checking if "embed-certs-340656" exists ...
	I0213 23:13:48.796295   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.796352   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.805562   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40063
	I0213 23:13:48.806234   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.815444   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0213 23:13:48.815451   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.815558   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.817565   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.817770   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.818164   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.818796   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.818815   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.819308   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I0213 23:13:48.819537   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.819661   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.819723   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.821798   49443 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:13:48.820119   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.821685   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.823106   49443 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:48.823122   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:13:48.823142   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.824803   49443 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:13:48.826431   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.826467   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:13:48.826487   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:13:48.826507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.826393   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.826536   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.827054   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.827129   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.827155   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.827617   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.828067   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.828089   49443 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:13:48.828119   49443 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:13:48.828335   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.828539   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.830417   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.831572   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.831604   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.832609   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.832827   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.832999   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.833165   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:48.851188   49443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0213 23:13:48.851868   49443 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:13:48.852446   49443 main.go:141] libmachine: Using API Version  1
	I0213 23:13:48.852482   49443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:13:48.852913   49443 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:13:48.853134   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetState
	I0213 23:13:48.855360   49443 main.go:141] libmachine: (embed-certs-340656) Calling .DriverName
	I0213 23:13:48.855766   49443 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:48.855792   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:13:48.855810   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHHostname
	I0213 23:13:48.859610   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.859877   49443 main.go:141] libmachine: (embed-certs-340656) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:e3:24", ip: ""} in network mk-embed-certs-340656: {Iface:virbr2 ExpiryTime:2024-02-14 00:08:21 +0000 UTC Type:0 Mac:52:54:00:72:e3:24 Iaid: IPaddr:192.168.61.56 Prefix:24 Hostname:embed-certs-340656 Clientid:01:52:54:00:72:e3:24}
	I0213 23:13:48.859915   49443 main.go:141] libmachine: (embed-certs-340656) DBG | domain embed-certs-340656 has defined IP address 192.168.61.56 and MAC address 52:54:00:72:e3:24 in network mk-embed-certs-340656
	I0213 23:13:48.860263   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHPort
	I0213 23:13:48.860507   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHKeyPath
	I0213 23:13:48.860699   49443 main.go:141] libmachine: (embed-certs-340656) Calling .GetSSHUsername
	I0213 23:13:48.860854   49443 sshutil.go:53] new ssh client: &{IP:192.168.61.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/embed-certs-340656/id_rsa Username:docker}
	I0213 23:13:49.015561   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:13:49.019336   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:13:49.047556   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:13:49.047593   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:13:49.083994   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:13:49.109749   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:13:49.109778   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:13:49.196430   49443 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.196459   49443 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:13:49.297603   49443 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:13:49.306053   49443 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-340656" context rescaled to 1 replicas
	I0213 23:13:49.306112   49443 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.56 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:13:49.307559   49443 out.go:177] * Verifying Kubernetes components...
	I0213 23:13:49.308883   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:13:51.125630   49443 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.109969214s)
	I0213 23:13:51.125663   49443 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
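
The ssh_runner lines above show minikube patching the CoreDNS Corefile over SSH so that host.minikube.internal resolves to the host-only gateway (192.168.61.1 for this profile). Below is a minimal client-go sketch of the same edit, assuming a kubeconfig at the default path and the stock eight-space Corefile indentation; it is an illustration of the technique, not minikube's own start.go code.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// Fetch the coredns ConfigMap that the log edits via "kubectl get | sed | kubectl replace".
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Insert a hosts {} block in front of the forward plugin, mirroring the sed expression in the log.
	hostIP := "192.168.61.1" // value taken from the log line above
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hostsBlock+"        forward . /etc/resolv.conf", 1)

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
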
	I0213 23:13:51.492579   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.473198087s)
	I0213 23:13:51.492655   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492672   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492587   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.408541587s)
	I0213 23:13:51.492794   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.492820   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.492955   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493027   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493041   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493052   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493061   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493362   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493392   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493401   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493458   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.493492   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493501   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.493511   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.493520   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.493768   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.493791   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.550911   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.550944   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.551267   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.551319   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.728993   49443 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.420033663s)
	I0213 23:13:51.729078   49443 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.431431547s)
	I0213 23:13:51.729114   49443 node_ready.go:35] waiting up to 6m0s for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.729135   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729163   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729446   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729462   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729473   49443 main.go:141] libmachine: Making call to close driver server
	I0213 23:13:51.729483   49443 main.go:141] libmachine: (embed-certs-340656) Calling .Close
	I0213 23:13:51.729770   49443 main.go:141] libmachine: (embed-certs-340656) DBG | Closing plugin on server side
	I0213 23:13:51.729803   49443 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:13:51.729813   49443 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:13:51.729823   49443 addons.go:470] Verifying addon metrics-server=true in "embed-certs-340656"
	I0213 23:13:51.732785   49443 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:13:47.812862   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:49.820823   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:52.318873   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:51.733634   49443 addons.go:505] enable addons completed in 2.972313278s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:13:51.741252   49443 node_ready.go:49] node "embed-certs-340656" has status "Ready":"True"
	I0213 23:13:51.741279   49443 node_ready.go:38] duration metric: took 12.133263ms waiting for node "embed-certs-340656" to be "Ready" ...
	I0213 23:13:51.741290   49443 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:13:51.749409   49443 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766298   49443 pod_ready.go:92] pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.766331   49443 pod_ready.go:81] duration metric: took 1.01688514s waiting for pod "coredns-5dd5756b68-vrbjt" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.766345   49443 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777697   49443 pod_ready.go:92] pod "etcd-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.777725   49443 pod_ready.go:81] duration metric: took 11.371663ms waiting for pod "etcd-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.777738   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789006   49443 pod_ready.go:92] pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.789030   49443 pod_ready.go:81] duration metric: took 11.286651ms waiting for pod "kube-apiserver-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.789040   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798798   49443 pod_ready.go:92] pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:52.798820   49443 pod_ready.go:81] duration metric: took 9.773358ms waiting for pod "kube-controller-manager-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:52.798829   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807522   49443 pod_ready.go:92] pod "kube-proxy-4vgt5" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:53.807555   49443 pod_ready.go:81] duration metric: took 1.00871819s waiting for pod "kube-proxy-4vgt5" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:53.807569   49443 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133771   49443 pod_ready.go:92] pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace has status "Ready":"True"
	I0213 23:13:54.133808   49443 pod_ready.go:81] duration metric: took 326.228368ms waiting for pod "kube-scheduler-embed-certs-340656" in "kube-system" namespace to be "Ready" ...
	I0213 23:13:54.133819   49443 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
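
From here the log interleaves three profiles (PIDs 49120, 49443, 49715), and the 49120 and 49443 runs keep hitting pod_ready.go:102 because their metrics-server pods never report Ready, presumably because the addon points at the unreachable fake.domain/registry.k8s.io/echoserver:1.4 test image noted in the "Using image" lines above. A minimal client-go sketch of such a Ready-condition poll follows, assuming a kubeconfig at the default path; it is illustrative only, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's PodReady condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, namespace, podName string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// Pod name taken from the log above; the 6m timeout matches the "waiting up to 6m0s" messages.
	if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-9vcz5", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
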
	I0213 23:13:55.947176   49715 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502842 seconds
	I0213 23:13:55.947340   49715 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:13:55.968064   49715 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:13:56.503592   49715 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:13:56.503798   49715 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-083863 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:13:57.020246   49715 kubeadm.go:322] [bootstrap-token] Using token: 1sfxye.gyrkuj525fbtgg0g
	I0213 23:13:57.021591   49715 out.go:204]   - Configuring RBAC rules ...
	I0213 23:13:57.021724   49715 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:13:57.028718   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:13:57.038574   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:13:57.046578   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:13:57.051622   49715 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:13:57.065769   49715 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:13:57.091404   49715 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:13:57.330768   49715 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:13:57.436406   49715 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:13:57.436445   49715 kubeadm.go:322] 
	I0213 23:13:57.436542   49715 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:13:57.436556   49715 kubeadm.go:322] 
	I0213 23:13:57.436650   49715 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:13:57.436681   49715 kubeadm.go:322] 
	I0213 23:13:57.436729   49715 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:13:57.436813   49715 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:13:57.436887   49715 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:13:57.436898   49715 kubeadm.go:322] 
	I0213 23:13:57.436989   49715 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:13:57.437002   49715 kubeadm.go:322] 
	I0213 23:13:57.437067   49715 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:13:57.437078   49715 kubeadm.go:322] 
	I0213 23:13:57.437137   49715 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:13:57.437227   49715 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:13:57.437344   49715 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:13:57.437365   49715 kubeadm.go:322] 
	I0213 23:13:57.437463   49715 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:13:57.437561   49715 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:13:57.437577   49715 kubeadm.go:322] 
	I0213 23:13:57.437713   49715 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.437878   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:13:57.437915   49715 kubeadm.go:322] 	--control-plane 
	I0213 23:13:57.437925   49715 kubeadm.go:322] 
	I0213 23:13:57.438021   49715 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:13:57.438032   49715 kubeadm.go:322] 
	I0213 23:13:57.438140   49715 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 1sfxye.gyrkuj525fbtgg0g \
	I0213 23:13:57.438284   49715 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:13:57.438602   49715 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:13:57.438886   49715 cni.go:84] Creating CNI manager for ""
	I0213 23:13:57.438904   49715 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:13:57.440968   49715 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:13:57.442459   49715 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:13:57.466652   49715 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
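
The 457-byte file written here is the bridge CNI config installed by the "Configuring bridge CNI" step. The log does not show its contents; the sketch below writes a representative bridge + host-local conflist of the same general shape, with all field values assumed for illustration rather than copied from the actual file.

package main

import "os"

// Representative bridge CNI conflist (values assumed, not the exact 457-byte file from the log).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
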
	I0213 23:13:57.538217   49715 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:13:57.538279   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:57.538289   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=default-k8s-diff-port-083863 minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:54.320129   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.812983   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:56.141892   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:58.143201   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:13:57.914767   49715 ops.go:34] apiserver oom_adj: -16
	I0213 23:13:57.914957   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.415274   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.915866   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.415351   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:59.915329   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.415646   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:00.915129   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.415803   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:01.915716   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:02.415378   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:13:58.815013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:01.312236   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:00.645227   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:03.145517   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:02.915447   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.415367   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.915183   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.416047   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:04.915850   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.415867   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:05.915570   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.415580   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:06.915010   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:07.415431   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:03.314560   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.817591   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:05.642499   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.644055   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:07.916067   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.415001   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:08.915359   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.415672   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:09.915997   49715 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:14:10.105267   49715 kubeadm.go:1088] duration metric: took 12.567044904s to wait for elevateKubeSystemPrivileges.
	I0213 23:14:10.105293   49715 kubeadm.go:406] StartCluster complete in 5m12.839656692s
	I0213 23:14:10.105310   49715 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.105392   49715 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:14:10.107335   49715 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:14:10.107629   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:14:10.107747   49715 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:14:10.107821   49715 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:14:10.107841   49715 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107858   49715 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107866   49715 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-083863"
	I0213 23:14:10.107873   49715 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-083863"
	W0213 23:14:10.107878   49715 addons.go:243] addon storage-provisioner should already be in state true
	I0213 23:14:10.107885   49715 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-083863"
	I0213 23:14:10.107905   49715 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.107917   49715 addons.go:243] addon metrics-server should already be in state true
	I0213 23:14:10.107941   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.107961   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.108282   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108352   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108368   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.108382   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108392   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.108355   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.124618   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44363
	I0213 23:14:10.124636   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0213 23:14:10.125154   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125261   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.125984   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.125990   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.126014   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126029   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.126422   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126501   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.126604   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.127038   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.127067   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131142   49715 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-083863"
	W0213 23:14:10.131168   49715 addons.go:243] addon default-storageclass should already be in state true
	I0213 23:14:10.131196   49715 host.go:66] Checking if "default-k8s-diff-port-083863" exists ...
	I0213 23:14:10.131628   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.131661   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.131866   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0213 23:14:10.132342   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.133024   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.133044   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.133539   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.134069   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.134119   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.145244   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0213 23:14:10.145674   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.146213   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.146233   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.146642   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.146845   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.148779   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.151227   49715 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0213 23:14:10.152983   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0213 23:14:10.153004   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0213 23:14:10.150602   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I0213 23:14:10.153029   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.154229   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.154857   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.154876   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.155560   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.156429   49715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:14:10.156476   49715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:14:10.156757   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.157450   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.157680   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.157898   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.158068   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.158211   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.159437   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0213 23:14:10.159780   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.160316   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.160328   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.160712   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.160874   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.163133   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.166002   49715 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:14:10.168221   49715 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.168239   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:14:10.168259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.172119   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172539   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.172562   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.172800   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.173447   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.173609   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.173769   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.175322   49715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0213 23:14:10.175719   49715 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:14:10.176212   49715 main.go:141] libmachine: Using API Version  1
	I0213 23:14:10.176223   49715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:14:10.176556   49715 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:14:10.176727   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetState
	I0213 23:14:10.178938   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .DriverName
	I0213 23:14:10.179149   49715 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.179163   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:14:10.179174   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHHostname
	I0213 23:14:10.182253   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.182739   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7c:77:f5", ip: ""} in network mk-default-k8s-diff-port-083863: {Iface:virbr3 ExpiryTime:2024-02-14 00:00:57 +0000 UTC Type:0 Mac:52:54:00:7c:77:f5 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:default-k8s-diff-port-083863 Clientid:01:52:54:00:7c:77:f5}
	I0213 23:14:10.182773   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | domain default-k8s-diff-port-083863 has defined IP address 192.168.39.3 and MAC address 52:54:00:7c:77:f5 in network mk-default-k8s-diff-port-083863
	I0213 23:14:10.183106   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHPort
	I0213 23:14:10.183259   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHKeyPath
	I0213 23:14:10.183425   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .GetSSHUsername
	I0213 23:14:10.183534   49715 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/default-k8s-diff-port-083863/id_rsa Username:docker}
	I0213 23:14:10.327834   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0213 23:14:10.327857   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0213 23:14:10.362507   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:14:10.405623   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0213 23:14:10.405655   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0213 23:14:10.413284   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:14:10.427964   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:14:10.459317   49715 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.459343   49715 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0213 23:14:10.552860   49715 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0213 23:14:10.687588   49715 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-083863" context rescaled to 1 replicas
	I0213 23:14:10.687640   49715 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:14:10.689888   49715 out.go:177] * Verifying Kubernetes components...
	I0213 23:14:10.691656   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:14:08.312251   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:10.313161   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.313239   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.671905   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.309368382s)
	I0213 23:14:12.671963   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258642736s)
	I0213 23:14:12.671974   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.671999   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672005   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672008   49715 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244007691s)
	I0213 23:14:12.672048   49715 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0213 23:14:12.672013   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672319   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672385   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672358   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672414   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672428   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672440   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672391   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672502   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672511   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.672522   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.672672   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672713   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.672825   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.672842   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.672845   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.718598   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.718635   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.718899   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.718948   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.718957   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992151   49715 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.439242656s)
	I0213 23:14:12.992169   49715 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.300483548s)
	I0213 23:14:12.992204   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992208   49715 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:12.992219   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.992608   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) DBG | Closing plugin on server side
	I0213 23:14:12.992650   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.992674   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.992694   49715 main.go:141] libmachine: Making call to close driver server
	I0213 23:14:12.992706   49715 main.go:141] libmachine: (default-k8s-diff-port-083863) Calling .Close
	I0213 23:14:12.993012   49715 main.go:141] libmachine: Successfully made call to close driver server
	I0213 23:14:12.993033   49715 main.go:141] libmachine: Making call to close connection to plugin binary
	I0213 23:14:12.993082   49715 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-083863"
	I0213 23:14:12.994959   49715 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0213 23:14:10.144369   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.642284   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:12.996304   49715 addons.go:505] enable addons completed in 2.888556474s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0213 23:14:13.017331   49715 node_ready.go:49] node "default-k8s-diff-port-083863" has status "Ready":"True"
	I0213 23:14:13.017356   49715 node_ready.go:38] duration metric: took 25.135832ms waiting for node "default-k8s-diff-port-083863" to be "Ready" ...
	I0213 23:14:13.017369   49715 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:14:13.040090   49715 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047064   49715 pod_ready.go:92] pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.047105   49715 pod_ready.go:81] duration metric: took 2.006967952s waiting for pod "coredns-5dd5756b68-zfscd" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.047119   49715 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052773   49715 pod_ready.go:92] pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.052793   49715 pod_ready.go:81] duration metric: took 5.668033ms waiting for pod "etcd-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.052801   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.057989   49715 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.058012   49715 pod_ready.go:81] duration metric: took 5.204253ms waiting for pod "kube-apiserver-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.058024   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063408   49715 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.063426   49715 pod_ready.go:81] duration metric: took 5.394681ms waiting for pod "kube-controller-manager-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.063434   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068502   49715 pod_ready.go:92] pod "kube-proxy-kvz2b" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.068523   49715 pod_ready.go:81] duration metric: took 5.082168ms waiting for pod "kube-proxy-kvz2b" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.068534   49715 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445109   49715 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace has status "Ready":"True"
	I0213 23:14:15.445132   49715 pod_ready.go:81] duration metric: took 376.590631ms waiting for pod "kube-scheduler-default-k8s-diff-port-083863" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:15.445142   49715 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	I0213 23:14:17.453588   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:14.816746   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.313290   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:15.141901   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:17.641098   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.453805   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.954116   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.812763   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.814338   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:19.641389   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:21.641735   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.142168   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.455003   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.952168   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:24.312468   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.813420   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:26.641722   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.141082   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:28.954054   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:30.954647   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:29.311343   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.312249   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:31.143011   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.642102   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.452218   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.453522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.457001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:33.314313   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:35.812309   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:36.143532   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:38.640894   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:39.955206   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.456339   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:37.813776   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.314111   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:40.642572   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:43.141919   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:44.955150   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.454324   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:42.813470   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.313382   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:45.143485   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.641760   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.954167   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.453822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:47.814576   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:50.312600   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.313062   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:49.642698   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:52.141500   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.141646   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.454979   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.953279   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:54.812403   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.813413   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:56.142104   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:58.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.453692   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.952522   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:14:59.313705   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:01.813002   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:00.642441   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:02.644754   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.954032   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.453202   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:03.813780   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:06.312152   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:04.645545   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:07.142188   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.454411   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:10.953929   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:08.813133   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.315282   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:09.641331   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:11.644066   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:14.141197   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.452937   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:15.453227   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:17.455142   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:13.814488   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.312013   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:16.142256   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.641861   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:19.956449   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.454447   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:18.313100   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.315124   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:20.642516   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:23.141725   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.955277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:26.956469   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:22.813277   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:24.813332   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.313503   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:25.148206   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:27.642527   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.453659   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:31.953193   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.812921   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.311859   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:29.642812   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:32.141177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.141385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.452179   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.454250   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:34.312263   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.812360   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:36.642681   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.142639   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:38.952639   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:40.953841   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:39.311603   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.312975   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:41.640004   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.641689   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:42.954046   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.453175   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:43.812207   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:46.313761   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:45.642354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.141466   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:47.953013   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.455958   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:48.813689   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:51.312695   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:50.144359   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.145852   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:52.952203   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.960421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.455215   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:53.312858   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:55.313197   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.313493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:54.642775   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:57.142159   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.143780   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.953718   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.954907   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:15:59.813086   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:02.313743   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:01.640609   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:03.641712   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.453269   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:06.454001   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:04.813366   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.313460   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:05.642520   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:07.644309   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:08.454568   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.953538   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:09.315454   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:11.814145   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:10.142385   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.644175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:12.953619   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.452015   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.455884   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:14.311599   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:16.312822   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:15.143506   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:17.643647   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:19.952742   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:21.953464   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:18.314298   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.812863   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:20.142175   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:22.641953   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.953599   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.953715   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:23.312368   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:25.813170   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:24.642939   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:27.143008   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.452587   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.454360   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:28.314038   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:30.812058   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:29.642029   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.141959   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.142628   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.955547   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:35.453428   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.456558   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:32.813040   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:34.813607   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:37.314673   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:36.143091   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:38.147685   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.953073   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:42.452724   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:39.811843   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:41.811877   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:40.645177   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.140828   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:44.453277   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.453393   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:43.813703   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:46.312231   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:45.141859   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:47.142843   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.453508   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.456357   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:48.312293   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:50.812918   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:49.641676   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.142518   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:52.951784   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.954108   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.455497   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:53.312477   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:55.313195   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:54.642918   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.141241   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.141855   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.954832   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.455675   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:57.811554   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:16:59.813709   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:02.313752   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:01.142778   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:03.143196   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.953816   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.953967   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:04.812917   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:06.814681   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:05.644404   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:07.644824   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.455392   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.953935   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:09.312828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:11.811876   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:10.141985   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:12.642984   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.453572   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.454161   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:14.314828   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:16.813786   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:15.143013   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:17.143864   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.144089   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:18.952608   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:20.952810   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:19.312837   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.316700   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:21.641354   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:24.142975   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:22.953607   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.453091   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.454501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:23.811674   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:25.814225   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:26.640796   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:28.642684   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:29.952519   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.453137   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:27.816563   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.314052   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:30.642932   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:33.142380   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.456778   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.459583   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:32.812724   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:34.812895   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:36.813814   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:35.641888   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.144690   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.952822   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.956268   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:38.821433   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:41.313306   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:40.641240   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:42.641667   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.453378   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.953398   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:43.313457   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812493   49120 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:45.812519   49120 pod_ready.go:81] duration metric: took 4m0.007851911s waiting for pod "metrics-server-57f55c9bc5-mt6qd" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:45.812528   49120 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:45.812534   49120 pod_ready.go:38] duration metric: took 4m2.781943239s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:45.812548   49120 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:45.812574   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:45.812640   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:45.881239   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:45.881267   49120 cri.go:89] found id: ""
	I0213 23:17:45.881277   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:45.881327   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.886446   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:45.886531   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:45.926920   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:45.926947   49120 cri.go:89] found id: ""
	I0213 23:17:45.926955   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:45.927000   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.931500   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:45.931577   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:45.979081   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:45.979109   49120 cri.go:89] found id: ""
	I0213 23:17:45.979119   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:45.979174   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:45.984481   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:45.984539   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:46.035365   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.035385   49120 cri.go:89] found id: ""
	I0213 23:17:46.035392   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:46.035438   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.039634   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:46.039695   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:46.087404   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:46.087429   49120 cri.go:89] found id: ""
	I0213 23:17:46.087436   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:46.087490   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.091828   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:46.091889   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:46.133625   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:46.133651   49120 cri.go:89] found id: ""
	I0213 23:17:46.133658   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:46.133710   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.138378   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:46.138456   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:46.181018   49120 cri.go:89] found id: ""
	I0213 23:17:46.181048   49120 logs.go:276] 0 containers: []
	W0213 23:17:46.181058   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:46.181065   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:46.181141   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:46.221347   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.221374   49120 cri.go:89] found id: ""
	I0213 23:17:46.221385   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:46.221448   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:46.226298   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:46.226331   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:46.268881   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:46.268915   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:46.325183   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:46.325225   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:46.372600   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:46.372637   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:46.791381   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:46.791438   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:46.861239   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:46.861431   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:46.884969   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:46.885009   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:46.909324   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:46.909352   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:46.966664   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:46.966698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:47.030276   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:47.030321   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:47.081480   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:47.081516   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:47.238201   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:47.238238   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:47.285995   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:47.286033   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:47.332459   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332486   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:47.332566   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:47.332580   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:47.332596   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:47.332616   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:47.332622   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:44.643384   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.141032   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:47.953650   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:50.453421   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.453501   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:49.641373   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:52.142827   49443 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:54.141398   49443 pod_ready.go:81] duration metric: took 4m0.007567399s waiting for pod "metrics-server-57f55c9bc5-9vcz5" in "kube-system" namespace to be "Ready" ...
	E0213 23:17:54.141420   49443 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:17:54.141428   49443 pod_ready.go:38] duration metric: took 4m2.400127673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:17:54.141441   49443 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:17:54.141464   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:54.141506   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:54.203295   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:54.203319   49443 cri.go:89] found id: ""
	I0213 23:17:54.203329   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:54.203387   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.208671   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:54.208748   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:54.254150   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:54.254183   49443 cri.go:89] found id: ""
	I0213 23:17:54.254193   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:54.254259   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.259090   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:54.259178   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:54.309365   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:54.309385   49443 cri.go:89] found id: ""
	I0213 23:17:54.309392   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:54.309436   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.315937   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:54.316014   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:54.363796   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.363855   49443 cri.go:89] found id: ""
	I0213 23:17:54.363866   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:54.363926   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.368767   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:54.368842   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:54.417590   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:54.417620   49443 cri.go:89] found id: ""
	I0213 23:17:54.417637   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:54.417696   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.422980   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:54.423053   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:54.468990   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.469019   49443 cri.go:89] found id: ""
	I0213 23:17:54.469029   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:54.469094   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.473989   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:54.474073   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:54.524124   49443 cri.go:89] found id: ""
	I0213 23:17:54.524154   49443 logs.go:276] 0 containers: []
	W0213 23:17:54.524164   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:54.524172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:54.524239   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.953845   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.459517   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.333824   49120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:57.351216   49120 api_server.go:72] duration metric: took 4m15.50672707s to wait for apiserver process to appear ...
	I0213 23:17:57.351245   49120 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:57.351281   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:57.351340   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:57.405928   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:57.405956   49120 cri.go:89] found id: ""
	I0213 23:17:57.405963   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:17:57.406007   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.410541   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:57.410619   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:57.456843   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:57.456871   49120 cri.go:89] found id: ""
	I0213 23:17:57.456881   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:17:57.456940   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.461801   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:57.461852   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:57.504653   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.504690   49120 cri.go:89] found id: ""
	I0213 23:17:57.504702   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:17:57.504762   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.509177   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:57.509250   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:57.556672   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:57.556696   49120 cri.go:89] found id: ""
	I0213 23:17:57.556704   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:17:57.556747   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.561343   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:57.561399   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:57.606959   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:57.606994   49120 cri.go:89] found id: ""
	I0213 23:17:57.607005   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:17:57.607068   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.611356   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:57.611440   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:57.655205   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:57.655230   49120 cri.go:89] found id: ""
	I0213 23:17:57.655238   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:17:57.655284   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.659762   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:57.659850   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:57.699989   49120 cri.go:89] found id: ""
	I0213 23:17:57.700012   49120 logs.go:276] 0 containers: []
	W0213 23:17:57.700019   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:57.700028   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:57.700075   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:54.562654   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.562674   49443 cri.go:89] found id: ""
	I0213 23:17:54.562682   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:54.562745   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:54.567182   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:54.567209   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:54.666809   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:54.666847   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:54.818292   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:54.818324   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:54.878074   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:54.878108   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:54.938472   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:54.938509   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:54.985201   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:54.985235   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:54.999987   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:55.000016   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:55.058536   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:55.058573   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:55.108130   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:55.108172   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:55.154299   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:55.154327   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:55.205554   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:55.205583   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:55.615944   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:55.615987   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.179069   49443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:17:58.194968   49443 api_server.go:72] duration metric: took 4m8.888826635s to wait for apiserver process to appear ...
	I0213 23:17:58.194992   49443 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:17:58.195020   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:17:58.195067   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:17:58.245997   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.246029   49443 cri.go:89] found id: ""
	I0213 23:17:58.246038   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:17:58.246103   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.251486   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:17:58.251566   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:17:58.299878   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:58.299909   49443 cri.go:89] found id: ""
	I0213 23:17:58.299919   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:17:58.299977   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.305075   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:17:58.305139   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:17:58.352587   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:58.352617   49443 cri.go:89] found id: ""
	I0213 23:17:58.352628   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:17:58.352688   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.357493   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:17:58.357576   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:17:58.412181   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.412203   49443 cri.go:89] found id: ""
	I0213 23:17:58.412211   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:17:58.412265   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.418852   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:17:58.418931   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:17:58.470881   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.470907   49443 cri.go:89] found id: ""
	I0213 23:17:58.470916   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:17:58.470970   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.476768   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:17:58.476851   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:17:58.548272   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:58.548293   49443 cri.go:89] found id: ""
	I0213 23:17:58.548301   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:17:58.548357   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.553380   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:17:58.553452   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:17:58.599623   49443 cri.go:89] found id: ""
	I0213 23:17:58.599652   49443 logs.go:276] 0 containers: []
	W0213 23:17:58.599663   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:17:58.599669   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:17:58.599725   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:17:58.647872   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.647896   49443 cri.go:89] found id: ""
	I0213 23:17:58.647906   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:17:58.647966   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:17:58.653015   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:17:58.653041   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:17:58.707958   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:17:58.708000   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:17:58.759975   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:17:58.760015   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:17:58.814801   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:17:58.814833   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:17:58.853782   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.853814   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:59.217806   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:17:59.217854   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:59.278255   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:59.278294   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:17:59.385496   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:59.385537   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:59.953729   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:02.454016   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:17:57.740739   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:57.740774   49120 cri.go:89] found id: ""
	I0213 23:17:57.740785   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:17:57.740839   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:17:57.745140   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:57.745163   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:17:57.758556   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:17:57.758604   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:17:57.900468   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:17:57.900507   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:17:57.945665   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:17:57.945693   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:17:58.003484   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:17:58.003521   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:17:58.048797   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:17:58.048826   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:17:58.096309   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:17:58.096347   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:17:58.173795   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.173990   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.196277   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:17:58.196306   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:17:58.266087   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:17:58.266129   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:17:58.325638   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:17:58.325676   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:17:58.372711   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:17:58.372752   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:17:58.444057   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:17:58.444097   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:17:58.830470   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830511   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:17:58.830572   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:17:58.830591   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:17:58.830600   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:17:58.830610   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:17:58.830618   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:17:59.544056   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:17:59.544517   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:17:59.607033   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:17:59.607067   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:17:59.654534   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:17:59.654584   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:17:59.719274   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:17:59.719309   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:02.234489   49443 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0213 23:18:02.240412   49443 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0213 23:18:02.241675   49443 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:02.241699   49443 api_server.go:131] duration metric: took 4.046700263s to wait for apiserver health ...
	I0213 23:18:02.241710   49443 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:02.241735   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:02.241796   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:02.289133   49443 cri.go:89] found id: "746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:02.289158   49443 cri.go:89] found id: ""
	I0213 23:18:02.289166   49443 logs.go:276] 1 containers: [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a]
	I0213 23:18:02.289212   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.295450   49443 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:02.295527   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:02.342262   49443 cri.go:89] found id: "fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:02.342285   49443 cri.go:89] found id: ""
	I0213 23:18:02.342292   49443 logs.go:276] 1 containers: [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e]
	I0213 23:18:02.342337   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.346810   49443 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:02.346874   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:02.385638   49443 cri.go:89] found id: "5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:02.385665   49443 cri.go:89] found id: ""
	I0213 23:18:02.385673   49443 logs.go:276] 1 containers: [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7]
	I0213 23:18:02.385725   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.389834   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:02.389920   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:02.435078   49443 cri.go:89] found id: "404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:02.435110   49443 cri.go:89] found id: ""
	I0213 23:18:02.435121   49443 logs.go:276] 1 containers: [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95]
	I0213 23:18:02.435184   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.440237   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:02.440297   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:02.483869   49443 cri.go:89] found id: "92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.483891   49443 cri.go:89] found id: ""
	I0213 23:18:02.483899   49443 logs.go:276] 1 containers: [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c]
	I0213 23:18:02.483942   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.490454   49443 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:02.490532   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:02.540971   49443 cri.go:89] found id: "59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:02.541000   49443 cri.go:89] found id: ""
	I0213 23:18:02.541010   49443 logs.go:276] 1 containers: [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847]
	I0213 23:18:02.541069   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.545818   49443 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:02.545906   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:02.593132   49443 cri.go:89] found id: ""
	I0213 23:18:02.593159   49443 logs.go:276] 0 containers: []
	W0213 23:18:02.593166   49443 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:02.593172   49443 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:02.593222   49443 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:02.634979   49443 cri.go:89] found id: "9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.635015   49443 cri.go:89] found id: ""
	I0213 23:18:02.635028   49443 logs.go:276] 1 containers: [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2]
	I0213 23:18:02.635089   49443 ssh_runner.go:195] Run: which crictl
	I0213 23:18:02.640246   49443 logs.go:123] Gathering logs for kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] ...
	I0213 23:18:02.640274   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c"
	I0213 23:18:02.681426   49443 logs.go:123] Gathering logs for storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] ...
	I0213 23:18:02.681458   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2"
	I0213 23:18:02.721033   49443 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:02.721062   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:03.049340   49443 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:03.049385   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0213 23:18:03.154378   49443 logs.go:123] Gathering logs for kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] ...
	I0213 23:18:03.154417   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a"
	I0213 23:18:03.215045   49443 logs.go:123] Gathering logs for etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] ...
	I0213 23:18:03.215081   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e"
	I0213 23:18:03.260291   49443 logs.go:123] Gathering logs for kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] ...
	I0213 23:18:03.260320   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95"
	I0213 23:18:03.323526   49443 logs.go:123] Gathering logs for container status ...
	I0213 23:18:03.323565   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:03.378686   49443 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:03.378731   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:03.406717   49443 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:03.406742   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:03.547999   49443 logs.go:123] Gathering logs for coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] ...
	I0213 23:18:03.548035   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7"
	I0213 23:18:03.593226   49443 logs.go:123] Gathering logs for kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] ...
	I0213 23:18:03.593255   49443 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847"
	I0213 23:18:06.160914   49443 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:06.160954   49443 system_pods.go:61] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.160963   49443 system_pods.go:61] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.160970   49443 system_pods.go:61] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.160977   49443 system_pods.go:61] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.160996   49443 system_pods.go:61] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.161008   49443 system_pods.go:61] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.161018   49443 system_pods.go:61] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.161025   49443 system_pods.go:61] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.161035   49443 system_pods.go:74] duration metric: took 3.919318115s to wait for pod list to return data ...
	I0213 23:18:06.161046   49443 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:06.165231   49443 default_sa.go:45] found service account: "default"
	I0213 23:18:06.165262   49443 default_sa.go:55] duration metric: took 4.207834ms for default service account to be created ...
	I0213 23:18:06.165271   49443 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:06.172453   49443 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:06.172488   49443 system_pods.go:89] "coredns-5dd5756b68-vrbjt" [74c7f72d-10b1-467f-92ac-2888540bd3a5] Running
	I0213 23:18:06.172494   49443 system_pods.go:89] "etcd-embed-certs-340656" [ac1f4941-bbd0-4245-ba0e-0fb1785b9c21] Running
	I0213 23:18:06.172499   49443 system_pods.go:89] "kube-apiserver-embed-certs-340656" [2c2b8777-b101-41f1-ad98-242ecb26dd4e] Running
	I0213 23:18:06.172503   49443 system_pods.go:89] "kube-controller-manager-embed-certs-340656" [32f0e953-c3c9-49ed-ab2b-0df0a0a0fa40] Running
	I0213 23:18:06.172507   49443 system_pods.go:89] "kube-proxy-4vgt5" [456eb472-9014-4674-b03c-8e2a0997455b] Running
	I0213 23:18:06.172512   49443 system_pods.go:89] "kube-scheduler-embed-certs-340656" [9b3b89bc-ea04-4476-b912-8180467a4c28] Running
	I0213 23:18:06.172517   49443 system_pods.go:89] "metrics-server-57f55c9bc5-9vcz5" [8df81e37-71b7-4220-9652-070538ce5a7f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:06.172522   49443 system_pods.go:89] "storage-provisioner" [1cdcb32e-024c-4055-b02f-807b7cc69b74] Running
	I0213 23:18:06.172531   49443 system_pods.go:126] duration metric: took 7.254871ms to wait for k8s-apps to be running ...
	I0213 23:18:06.172541   49443 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:06.172598   49443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:06.193026   49443 system_svc.go:56] duration metric: took 20.479072ms WaitForService to wait for kubelet.
	I0213 23:18:06.193051   49443 kubeadm.go:581] duration metric: took 4m16.886913912s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:06.193072   49443 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:06.196910   49443 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:06.196940   49443 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:06.196951   49443 node_conditions.go:105] duration metric: took 3.874223ms to run NodePressure ...
	I0213 23:18:06.196962   49443 start.go:228] waiting for startup goroutines ...
	I0213 23:18:06.196968   49443 start.go:233] waiting for cluster config update ...
	I0213 23:18:06.196977   49443 start.go:242] writing updated cluster config ...
	I0213 23:18:06.197233   49443 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:06.248295   49443 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:06.250392   49443 out.go:177] * Done! kubectl is now configured to use "embed-certs-340656" cluster and "default" namespace by default
	I0213 23:18:04.455358   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:06.953191   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.954115   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:10.954853   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:08.832437   49120 api_server.go:253] Checking apiserver healthz at https://192.168.83.31:8443/healthz ...
	I0213 23:18:08.838687   49120 api_server.go:279] https://192.168.83.31:8443/healthz returned 200:
	ok
	I0213 23:18:08.839999   49120 api_server.go:141] control plane version: v1.29.0-rc.2
	I0213 23:18:08.840021   49120 api_server.go:131] duration metric: took 11.488768389s to wait for apiserver health ...
	I0213 23:18:08.840031   49120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:08.840058   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:08.840122   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:08.891532   49120 cri.go:89] found id: "a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:08.891559   49120 cri.go:89] found id: ""
	I0213 23:18:08.891567   49120 logs.go:276] 1 containers: [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2]
	I0213 23:18:08.891618   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.896712   49120 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:08.896802   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:08.943555   49120 cri.go:89] found id: "75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:08.943584   49120 cri.go:89] found id: ""
	I0213 23:18:08.943593   49120 logs.go:276] 1 containers: [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a]
	I0213 23:18:08.943654   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:08.948658   49120 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:08.948730   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:08.995867   49120 cri.go:89] found id: "bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:08.995896   49120 cri.go:89] found id: ""
	I0213 23:18:08.995905   49120 logs.go:276] 1 containers: [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2]
	I0213 23:18:08.995970   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.000810   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:09.000883   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:09.046606   49120 cri.go:89] found id: "f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.046636   49120 cri.go:89] found id: ""
	I0213 23:18:09.046646   49120 logs.go:276] 1 containers: [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2]
	I0213 23:18:09.046706   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.050924   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:09.050986   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:09.097414   49120 cri.go:89] found id: "6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.097445   49120 cri.go:89] found id: ""
	I0213 23:18:09.097456   49120 logs.go:276] 1 containers: [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c]
	I0213 23:18:09.097525   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.102101   49120 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:09.102177   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:09.164244   49120 cri.go:89] found id: "1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.164267   49120 cri.go:89] found id: ""
	I0213 23:18:09.164274   49120 logs.go:276] 1 containers: [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab]
	I0213 23:18:09.164323   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.169164   49120 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:09.169238   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:09.217068   49120 cri.go:89] found id: ""
	I0213 23:18:09.217094   49120 logs.go:276] 0 containers: []
	W0213 23:18:09.217101   49120 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:09.217106   49120 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:09.217174   49120 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:09.256986   49120 cri.go:89] found id: "032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.257017   49120 cri.go:89] found id: ""
	I0213 23:18:09.257028   49120 logs.go:276] 1 containers: [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762]
	I0213 23:18:09.257088   49120 ssh_runner.go:195] Run: which crictl
	I0213 23:18:09.261602   49120 logs.go:123] Gathering logs for kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] ...
	I0213 23:18:09.261625   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2"
	I0213 23:18:09.314910   49120 logs.go:123] Gathering logs for kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] ...
	I0213 23:18:09.314957   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c"
	I0213 23:18:09.361576   49120 logs.go:123] Gathering logs for kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] ...
	I0213 23:18:09.361609   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab"
	I0213 23:18:09.433243   49120 logs.go:123] Gathering logs for storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] ...
	I0213 23:18:09.433281   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762"
	I0213 23:18:09.485648   49120 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:09.485698   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:09.634091   49120 logs.go:123] Gathering logs for kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] ...
	I0213 23:18:09.634127   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2"
	I0213 23:18:09.681649   49120 logs.go:123] Gathering logs for etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] ...
	I0213 23:18:09.681689   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a"
	I0213 23:18:09.729410   49120 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:09.729449   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:10.100058   49120 logs.go:123] Gathering logs for container status ...
	I0213 23:18:10.100104   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:10.156178   49120 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:10.156209   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:10.229188   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.229358   49120 logs.go:138] Found kubelet problem: Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.251947   49120 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:10.251987   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:10.268224   49120 logs.go:123] Gathering logs for coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] ...
	I0213 23:18:10.268251   49120 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2"
	I0213 23:18:10.319580   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319608   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:10.319651   49120 out.go:239] X Problems detected in kubelet:
	W0213 23:18:10.319663   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: W0213 23:13:41.360864    4323 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	W0213 23:18:10.319673   49120 out.go:239]   Feb 13 23:13:41 no-preload-778731 kubelet[4323]: E0213 23:13:41.360925    4323 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-778731" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-778731' and this object
	I0213 23:18:10.319685   49120 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:10.319696   49120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:13.453597   49715 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace has status "Ready":"False"
	I0213 23:18:15.445609   49715 pod_ready.go:81] duration metric: took 4m0.000451749s waiting for pod "metrics-server-57f55c9bc5-rkg49" in "kube-system" namespace to be "Ready" ...
	E0213 23:18:15.445643   49715 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0213 23:18:15.445653   49715 pod_ready.go:38] duration metric: took 4m2.428270702s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:18:15.445670   49715 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:18:15.445716   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:15.445773   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:15.501757   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:15.501791   49715 cri.go:89] found id: ""
	I0213 23:18:15.501802   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:15.501863   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.507658   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:15.507738   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:15.552164   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:15.552197   49715 cri.go:89] found id: ""
	I0213 23:18:15.552204   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:15.552257   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.557704   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:15.557764   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:15.606147   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:15.606168   49715 cri.go:89] found id: ""
	I0213 23:18:15.606175   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:15.606231   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.610863   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:15.610939   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:15.655298   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:15.655320   49715 cri.go:89] found id: ""
	I0213 23:18:15.655329   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:15.655387   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.660000   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:15.660062   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:15.699700   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:15.699735   49715 cri.go:89] found id: ""
	I0213 23:18:15.699745   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:15.699815   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.704535   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:15.704614   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:15.746999   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:15.747028   49715 cri.go:89] found id: ""
	I0213 23:18:15.747038   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:15.747091   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.752065   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:15.752137   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:15.793372   49715 cri.go:89] found id: ""
	I0213 23:18:15.793404   49715 logs.go:276] 0 containers: []
	W0213 23:18:15.793415   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:15.793422   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:15.793487   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:15.839630   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:15.839660   49715 cri.go:89] found id: ""
	I0213 23:18:15.839668   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:15.839723   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:15.844199   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:15.844225   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:15.904450   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:15.904479   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:15.925777   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:15.925805   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:16.079602   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:16.079634   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:16.121369   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:16.121400   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:16.174404   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:16.174440   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:16.216286   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:16.216321   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:16.629527   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:16.629564   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:16.708003   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.708235   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.729748   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:16.729784   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:16.784398   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:16.784432   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:16.829885   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:16.829923   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:16.872036   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:16.872066   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:16.937327   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937359   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:16.937411   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:16.937421   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:16.937431   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:16.937441   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:16.937449   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:20.329462   49120 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:20.329500   49120 system_pods.go:61] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.329508   49120 system_pods.go:61] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.329515   49120 system_pods.go:61] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.329521   49120 system_pods.go:61] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.329527   49120 system_pods.go:61] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.329533   49120 system_pods.go:61] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.329543   49120 system_pods.go:61] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.329550   49120 system_pods.go:61] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.329560   49120 system_pods.go:74] duration metric: took 11.489522059s to wait for pod list to return data ...
	I0213 23:18:20.329569   49120 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:20.332784   49120 default_sa.go:45] found service account: "default"
	I0213 23:18:20.332809   49120 default_sa.go:55] duration metric: took 3.233136ms for default service account to be created ...
	I0213 23:18:20.332817   49120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:20.339002   49120 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:20.339033   49120 system_pods.go:89] "coredns-76f75df574-f4g5w" [4ddbeb6e-f3b0-48d8-82f1-f824568835c7] Running
	I0213 23:18:20.339042   49120 system_pods.go:89] "etcd-no-preload-778731" [e5b8d90f-d7c4-4c3d-992b-3d851cf554fb] Running
	I0213 23:18:20.339049   49120 system_pods.go:89] "kube-apiserver-no-preload-778731" [7fc84f55-e6c6-42bc-b4f9-2caa26c8690e] Running
	I0213 23:18:20.339056   49120 system_pods.go:89] "kube-controller-manager-no-preload-778731" [86381098-bfcc-489a-9792-629887dd475b] Running
	I0213 23:18:20.339063   49120 system_pods.go:89] "kube-proxy-7vcqq" [18dc29be-3e93-4a62-ad66-7838671cdd21] Running
	I0213 23:18:20.339070   49120 system_pods.go:89] "kube-scheduler-no-preload-778731" [06ac8ed1-3c4a-4033-b8a6-6713956d4e3f] Running
	I0213 23:18:20.339084   49120 system_pods.go:89] "metrics-server-57f55c9bc5-mt6qd" [9726753d-b785-48dc-81d7-86a787851927] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:20.339093   49120 system_pods.go:89] "storage-provisioner" [5751d5c1-158a-46dc-b2ec-f74cc302de35] Running
	I0213 23:18:20.339116   49120 system_pods.go:126] duration metric: took 6.292649ms to wait for k8s-apps to be running ...
	I0213 23:18:20.339125   49120 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:20.339183   49120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:20.354459   49120 system_svc.go:56] duration metric: took 15.325743ms WaitForService to wait for kubelet.
	I0213 23:18:20.354488   49120 kubeadm.go:581] duration metric: took 4m38.510005999s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:20.354505   49120 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:20.358160   49120 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:20.358186   49120 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:20.358195   49120 node_conditions.go:105] duration metric: took 3.685402ms to run NodePressure ...
	I0213 23:18:20.358205   49120 start.go:228] waiting for startup goroutines ...
	I0213 23:18:20.358211   49120 start.go:233] waiting for cluster config update ...
	I0213 23:18:20.358220   49120 start.go:242] writing updated cluster config ...
	I0213 23:18:20.358527   49120 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:20.409811   49120 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0213 23:18:20.412251   49120 out.go:177] * Done! kubectl is now configured to use "no-preload-778731" cluster and "default" namespace by default
	I0213 23:18:26.939087   49715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:18:26.956231   49715 api_server.go:72] duration metric: took 4m16.268553955s to wait for apiserver process to appear ...
	I0213 23:18:26.956259   49715 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:18:26.956317   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:26.956382   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:27.006428   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.006455   49715 cri.go:89] found id: ""
	I0213 23:18:27.006465   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:27.006527   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.011468   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:27.011542   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:27.054309   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.054334   49715 cri.go:89] found id: ""
	I0213 23:18:27.054344   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:27.054393   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.058925   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:27.058979   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:27.101942   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.101971   49715 cri.go:89] found id: ""
	I0213 23:18:27.101981   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:27.102041   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.107540   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:27.107609   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:27.152126   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.152150   49715 cri.go:89] found id: ""
	I0213 23:18:27.152157   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:27.152203   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.156537   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:27.156608   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:27.202931   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:27.202952   49715 cri.go:89] found id: ""
	I0213 23:18:27.202959   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:27.203006   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.209339   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:27.209405   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:27.250771   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:27.250814   49715 cri.go:89] found id: ""
	I0213 23:18:27.250828   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:27.250898   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.255547   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:27.255621   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:27.297645   49715 cri.go:89] found id: ""
	I0213 23:18:27.297679   49715 logs.go:276] 0 containers: []
	W0213 23:18:27.297689   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:27.297697   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:27.297765   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:27.340690   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.340719   49715 cri.go:89] found id: ""
	I0213 23:18:27.340728   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:27.340786   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:27.345308   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:27.345338   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:27.481620   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:27.481653   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:27.541421   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:27.541456   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:27.594527   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:27.594559   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:27.657323   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:27.657358   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:27.710198   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:27.710234   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:27.750419   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:27.750451   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:28.148333   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:28.148374   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:28.162927   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:28.162959   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:28.214802   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:28.214835   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:28.264035   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:28.264061   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:28.328849   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:28.328888   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:28.408683   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.408859   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429691   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429721   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:28.429772   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:28.429780   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:28.429787   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:28.429793   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:28.429798   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:38.431065   49715 api_server.go:253] Checking apiserver healthz at https://192.168.39.3:8444/healthz ...
	I0213 23:18:38.438496   49715 api_server.go:279] https://192.168.39.3:8444/healthz returned 200:
	ok
	I0213 23:18:38.440109   49715 api_server.go:141] control plane version: v1.28.4
	I0213 23:18:38.440131   49715 api_server.go:131] duration metric: took 11.483865303s to wait for apiserver health ...
	I0213 23:18:38.440139   49715 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:18:38.440163   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0213 23:18:38.440218   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0213 23:18:38.485767   49715 cri.go:89] found id: "fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:38.485791   49715 cri.go:89] found id: ""
	I0213 23:18:38.485798   49715 logs.go:276] 1 containers: [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6]
	I0213 23:18:38.485847   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.490804   49715 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0213 23:18:38.490876   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0213 23:18:38.540174   49715 cri.go:89] found id: "d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:38.540196   49715 cri.go:89] found id: ""
	I0213 23:18:38.540203   49715 logs.go:276] 1 containers: [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65]
	I0213 23:18:38.540247   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.545816   49715 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0213 23:18:38.545904   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0213 23:18:38.593443   49715 cri.go:89] found id: "54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:38.593466   49715 cri.go:89] found id: ""
	I0213 23:18:38.593474   49715 logs.go:276] 1 containers: [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72]
	I0213 23:18:38.593531   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.598567   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0213 23:18:38.598642   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0213 23:18:38.646508   49715 cri.go:89] found id: "5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:38.646539   49715 cri.go:89] found id: ""
	I0213 23:18:38.646549   49715 logs.go:276] 1 containers: [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae]
	I0213 23:18:38.646605   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.651425   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0213 23:18:38.651489   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0213 23:18:38.695133   49715 cri.go:89] found id: "cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:38.695157   49715 cri.go:89] found id: ""
	I0213 23:18:38.695166   49715 logs.go:276] 1 containers: [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca]
	I0213 23:18:38.695226   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.700446   49715 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0213 23:18:38.700504   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0213 23:18:38.748214   49715 cri.go:89] found id: "090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.748251   49715 cri.go:89] found id: ""
	I0213 23:18:38.748261   49715 logs.go:276] 1 containers: [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647]
	I0213 23:18:38.748319   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.753466   49715 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0213 23:18:38.753532   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0213 23:18:38.796480   49715 cri.go:89] found id: ""
	I0213 23:18:38.796505   49715 logs.go:276] 0 containers: []
	W0213 23:18:38.796514   49715 logs.go:278] No container was found matching "kindnet"
	I0213 23:18:38.796521   49715 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0213 23:18:38.796597   49715 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0213 23:18:38.838145   49715 cri.go:89] found id: "b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.838189   49715 cri.go:89] found id: ""
	I0213 23:18:38.838199   49715 logs.go:276] 1 containers: [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17]
	I0213 23:18:38.838259   49715 ssh_runner.go:195] Run: which crictl
	I0213 23:18:38.844252   49715 logs.go:123] Gathering logs for kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] ...
	I0213 23:18:38.844279   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647"
	I0213 23:18:38.919402   49715 logs.go:123] Gathering logs for storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] ...
	I0213 23:18:38.919442   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17"
	I0213 23:18:38.963733   49715 logs.go:123] Gathering logs for container status ...
	I0213 23:18:38.963767   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0213 23:18:39.013301   49715 logs.go:123] Gathering logs for describe nodes ...
	I0213 23:18:39.013336   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0213 23:18:39.142161   49715 logs.go:123] Gathering logs for kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] ...
	I0213 23:18:39.142192   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae"
	I0213 23:18:39.199423   49715 logs.go:123] Gathering logs for kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] ...
	I0213 23:18:39.199455   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca"
	I0213 23:18:39.245639   49715 logs.go:123] Gathering logs for etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] ...
	I0213 23:18:39.245669   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65"
	I0213 23:18:39.290916   49715 logs.go:123] Gathering logs for coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] ...
	I0213 23:18:39.290954   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72"
	I0213 23:18:39.343373   49715 logs.go:123] Gathering logs for CRI-O ...
	I0213 23:18:39.343405   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0213 23:18:39.700393   49715 logs.go:123] Gathering logs for kubelet ...
	I0213 23:18:39.700441   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0213 23:18:39.777386   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.777564   49715 logs.go:138] Found kubelet problem: Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.800035   49715 logs.go:123] Gathering logs for dmesg ...
	I0213 23:18:39.800087   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0213 23:18:39.817941   49715 logs.go:123] Gathering logs for kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] ...
	I0213 23:18:39.817972   49715 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6"
	I0213 23:18:39.870635   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870675   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0213 23:18:39.870733   49715 out.go:239] X Problems detected in kubelet:
	W0213 23:18:39.870744   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: W0213 23:14:10.254369    3817 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	W0213 23:18:39.870749   49715 out.go:239]   Feb 13 23:14:10 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:14:10.254435    3817 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-083863" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-083863' and this object
	I0213 23:18:39.870756   49715 out.go:304] Setting ErrFile to fd 2...
	I0213 23:18:39.870764   49715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:18:49.878184   49715 system_pods.go:59] 8 kube-system pods found
	I0213 23:18:49.878220   49715 system_pods.go:61] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.878229   49715 system_pods.go:61] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.878237   49715 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.878244   49715 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.878250   49715 system_pods.go:61] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.878256   49715 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.878268   49715 system_pods.go:61] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.878276   49715 system_pods.go:61] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.878284   49715 system_pods.go:74] duration metric: took 11.438139039s to wait for pod list to return data ...
	I0213 23:18:49.878294   49715 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:18:49.881702   49715 default_sa.go:45] found service account: "default"
	I0213 23:18:49.881730   49715 default_sa.go:55] duration metric: took 3.42943ms for default service account to be created ...
	I0213 23:18:49.881741   49715 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:18:49.888356   49715 system_pods.go:86] 8 kube-system pods found
	I0213 23:18:49.888380   49715 system_pods.go:89] "coredns-5dd5756b68-zfscd" [98a75f73-94a2-4566-9b70-74d5ed759628] Running
	I0213 23:18:49.888385   49715 system_pods.go:89] "etcd-default-k8s-diff-port-083863" [91d585fe-7a8e-4700-9881-1a03b350351c] Running
	I0213 23:18:49.888392   49715 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-083863" [72eb5ec2-9cab-4573-b224-5b09c4a1eca2] Running
	I0213 23:18:49.888397   49715 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-083863" [f1b50108-2c33-49d2-b1d0-be9f8b395a06] Running
	I0213 23:18:49.888403   49715 system_pods.go:89] "kube-proxy-kvz2b" [54f06cac-d864-49cc-a00f-803d6f6333a3] Running
	I0213 23:18:49.888409   49715 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-083863" [c8856d1f-4667-4a05-b8b1-5c690a48c326] Running
	I0213 23:18:49.888422   49715 system_pods.go:89] "metrics-server-57f55c9bc5-rkg49" [d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0213 23:18:49.888434   49715 system_pods.go:89] "storage-provisioner" [bba2cb47-d726-4852-a704-b315daa0f646] Running
	I0213 23:18:49.888446   49715 system_pods.go:126] duration metric: took 6.698139ms to wait for k8s-apps to be running ...
	I0213 23:18:49.888456   49715 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:18:49.888497   49715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:18:49.905396   49715 system_svc.go:56] duration metric: took 16.928016ms WaitForService to wait for kubelet.
	I0213 23:18:49.905427   49715 kubeadm.go:581] duration metric: took 4m39.217754888s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:18:49.905452   49715 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:18:49.909261   49715 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:18:49.909296   49715 node_conditions.go:123] node cpu capacity is 2
	I0213 23:18:49.909312   49715 node_conditions.go:105] duration metric: took 3.854435ms to run NodePressure ...
	I0213 23:18:49.909326   49715 start.go:228] waiting for startup goroutines ...
	I0213 23:18:49.909334   49715 start.go:233] waiting for cluster config update ...
	I0213 23:18:49.909347   49715 start.go:242] writing updated cluster config ...
	I0213 23:18:49.909654   49715 ssh_runner.go:195] Run: rm -f paused
	I0213 23:18:49.961022   49715 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:18:49.963033   49715 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-083863" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:09:02 UTC, ends at Tue 2024-02-13 23:27:59 UTC. --
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.051934214Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866879051916783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=906ba882-33d7-4406-bd65-87d3b8edca5d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.052747821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f1bd5ff-1542-498b-85ab-ad6c191f2432 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.052841907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f1bd5ff-1542-498b-85ab-ad6c191f2432 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.053109072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f1bd5ff-1542-498b-85ab-ad6c191f2432 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.097803410Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=79d607a6-20f0-4225-838a-48139c5c8b4f name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.097870066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=79d607a6-20f0-4225-838a-48139c5c8b4f name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.099046740Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=053e9593-e504-4a27-8f3c-6dd3410a176b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.099434313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866879099420444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=053e9593-e504-4a27-8f3c-6dd3410a176b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.100180432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f1f19043-8aac-48d0-84b3-d51b9e35130e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.100257269Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f1f19043-8aac-48d0-84b3-d51b9e35130e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.100465058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f1f19043-8aac-48d0-84b3-d51b9e35130e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.148335898Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c0c95f59-2d16-4eb9-bf89-85fc795551e7 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.148420402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c0c95f59-2d16-4eb9-bf89-85fc795551e7 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.150750644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=da94b364-83c4-4eee-9783-57a1f1647be9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.151198676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866879151177065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=da94b364-83c4-4eee-9783-57a1f1647be9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.152504553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2bdbbbdc-adf1-4774-af28-d420172f7555 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.152668925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2bdbbbdc-adf1-4774-af28-d420172f7555 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.152865041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2bdbbbdc-adf1-4774-af28-d420172f7555 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.194721313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1215387f-1537-4be0-9c56-1f478f6aadb7 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.194783300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1215387f-1537-4be0-9c56-1f478f6aadb7 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.196699057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=718cc7f3-4034-46e7-b66f-70f986e4dacf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.197068083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866879197054324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=718cc7f3-4034-46e7-b66f-70f986e4dacf name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.198236716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5539a0d7-1f60-49de-b9bf-9889d42edc8d name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.198313135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5539a0d7-1f60-49de-b9bf-9889d42edc8d name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:27:59 old-k8s-version-245122 crio[715]: time="2024-02-13 23:27:59.198500631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707865817596176993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9549484d35e9603bafe39345a8e56ffa2d984ccb8dacb2d66f3e4a101ce7ec,PodSandboxId:1ea1776a6fd35ca2ab9afa0bb1a143b4a1466b1ba66e4ae10d29fa3b2eca6d79,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1707865787851434815,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c64fb331-f46d-44fb-a6fe-cc7e421d13ee,},Annotations:map[string]string{io.kubernetes.container.hash: 95732012,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1,PodSandboxId:0ee1de177ef1f21f8eea3363ae34573a7e443cc7ff007d7dff47f5cdf52cbf9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1707865786414970114,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3977149-1877-4180-b568-72c5ae81788f,},Annotations:map[string]string{io.kubernetes.container.hash: b8443d9d,io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19,PodSandboxId:dff35a34c018d8319a31a48ef7b301fb3632a0afa70ff741e83ce278e2839649,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1707865786657698883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-kr6t9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c060820-1e79-4e3e-92d8-ec77f75741c4,},Annotations:map[string]string{io.kubernetes.container.hash: be0dbf9d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b,PodSandboxId:6985f34c8dfeb50600e8f16186268e8f53a87509a92aca7fba51edb26874c1bc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1707865785091063570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nj7qx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efb1b13-7f14-49bd-aacf-6
00b7733cbe0,},Annotations:map[string]string{io.kubernetes.container.hash: e91a05eb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d,PodSandboxId:3cbb9b1b585e8f75808457180403e258ae55041931275d7c60a2117e30376945,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1707865777284036542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd95658e6d145feff7b098e46f743938,},Annotations:map[string]string{io.kube
rnetes.container.hash: 257413da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022,PodSandboxId:c235794c2618dd93dfce3f21888a20d44d9d66b1c881dd668e121101faceb77b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1707865776184807125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18b4e74ab253fe005b68903242f6bc8,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: b01f3b00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960,PodSandboxId:39d88fda12f10a15e415f0f20b1739a31af03b6ea26abf68ece3877102a192d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1707865775708434429,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.container.hash:
69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15,PodSandboxId:fbee32e09e8bdb9c17b254a1e534dfed543235237b0d90b7a6c504e6a4adb8ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1707865775627511677,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-245122,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]string{io.k
ubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5539a0d7-1f60-49de-b9bf-9889d42edc8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab470e6a37deb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       1                   0ee1de177ef1f       storage-provisioner
	6f9549484d35e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   1ea1776a6fd35       busybox
	2cabfb623c7fb       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   dff35a34c018d       coredns-5644d7b6d9-kr6t9
	9609117f701bf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       0                   0ee1de177ef1f       storage-provisioner
	f43c15c3d3903       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   6985f34c8dfeb       kube-proxy-nj7qx
	5926aa9fbfac6       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   3cbb9b1b585e8       etcd-old-k8s-version-245122
	2ec1e75ab6923       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   c235794c2618d       kube-apiserver-old-k8s-version-245122
	1626274a7b38f       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   39d88fda12f10       kube-scheduler-old-k8s-version-245122
	b4b01d14f2ef4       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   fbee32e09e8bd       kube-controller-manager-old-k8s-version-245122
	
	
	==> coredns [2cabfb623c7fb5ce8bdb1410fe1efff76f118d9bd5970ee1698b941fa387ba19] <==
	2024-02-13T23:09:51.877Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-02-13T23:09:51.890Z [INFO] 127.0.0.1:49025 - 59187 "HINFO IN 5388163579779728481.5269519262384264271. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013163185s
	2024-02-13T23:09:53.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2024-02-13T23:10:03.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	2024-02-13T23:10:13.712Z [INFO] plugin/ready: Still waiting on: "kubernetes"
	I0213 23:10:16.877224       1 trace.go:82] Trace[1240964328]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.876636146 +0000 UTC m=+0.045899897) (total time: 30.000549497s):
	Trace[1240964328]: [30.000549497s] [30.000549497s] END
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0213 23:10:16.877797       1 trace.go:82] Trace[85575035]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.877466157 +0000 UTC m=+0.046729882) (total time: 30.000283389s):
	Trace[85575035]: [30.000283389s] [30.000283389s] END
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0213 23:10:16.877994       1 trace.go:82] Trace[26344488]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2024-02-13 23:09:46.877258222 +0000 UTC m=+0.046521949) (total time: 30.000718418s):
	Trace[26344488]: [30.000718418s] [30.000718418s] END
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877323       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.877839       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0213 23:10:16.878035       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-245122
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-245122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=old-k8s-version-245122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T22_58_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 22:58:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:27:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:27:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:27:13 +0000   Tue, 13 Feb 2024 22:58:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:27:13 +0000   Tue, 13 Feb 2024 23:09:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    old-k8s-version-245122
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 3817d3973781432fa9a183fb2b2072e7
	 System UUID:                3817d397-3781-432f-a9a1-83fb2b2072e7
	 Boot ID:                    76248c73-daaa-4ecd-ab96-a014cd915ca9
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-kr6t9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-245122                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-apiserver-old-k8s-version-245122             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-245122    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-nj7qx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-245122             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                metrics-server-74d5856cc6-c6rp6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-245122  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-245122     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet, old-k8s-version-245122     Node old-k8s-version-245122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-245122     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-245122  Starting kube-proxy.
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.084539] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.181843] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Feb13 23:09] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160862] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.563977] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.608895] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.132022] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.183865] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.126986] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.286935] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +19.008648] systemd-fstab-generator[1017]: Ignoring "noauto" for root device
	[  +0.484880] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.367555] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [5926aa9fbfac6360f2a1422c7f894aeab940a36a82ac0d2b76d90b1fee39e60d] <==
	2024-02-13 23:09:37.385465 I | raft: e5487579cc149d4d became follower at term 2
	2024-02-13 23:09:37.385506 I | raft: newRaft e5487579cc149d4d [peers: [], term: 2, commit: 533, applied: 0, lastindex: 533, lastterm: 2]
	2024-02-13 23:09:37.394993 W | auth: simple token is not cryptographically signed
	2024-02-13 23:09:37.398280 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-02-13 23:09:37.399815 I | etcdserver/membership: added member e5487579cc149d4d [https://192.168.50.36:2380] to cluster 31bd1a1c1ff06930
	2024-02-13 23:09:37.399995 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-02-13 23:09:37.400084 I | etcdserver/api: enabled capabilities for version 3.3
	2024-02-13 23:09:37.404413 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-13 23:09:37.404709 I | embed: listening for metrics on http://192.168.50.36:2381
	2024-02-13 23:09:37.405098 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-02-13 23:09:39.186202 I | raft: e5487579cc149d4d is starting a new election at term 2
	2024-02-13 23:09:39.186297 I | raft: e5487579cc149d4d became candidate at term 3
	2024-02-13 23:09:39.186322 I | raft: e5487579cc149d4d received MsgVoteResp from e5487579cc149d4d at term 3
	2024-02-13 23:09:39.186348 I | raft: e5487579cc149d4d became leader at term 3
	2024-02-13 23:09:39.186365 I | raft: raft.node: e5487579cc149d4d elected leader e5487579cc149d4d at term 3
	2024-02-13 23:09:39.186861 I | etcdserver: published {Name:old-k8s-version-245122 ClientURLs:[https://192.168.50.36:2379]} to cluster 31bd1a1c1ff06930
	2024-02-13 23:09:39.187069 I | embed: ready to serve client requests
	2024-02-13 23:09:39.187499 I | embed: ready to serve client requests
	2024-02-13 23:09:39.189081 I | embed: serving client requests on 192.168.50.36:2379
	2024-02-13 23:09:39.190074 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-13 23:09:45.898892 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2026" took too long (109.346107ms) to execute
	2024-02-13 23:19:39.212602 I | mvcc: store.index: compact 830
	2024-02-13 23:19:39.215145 I | mvcc: finished scheduled compaction at 830 (took 2.071991ms)
	2024-02-13 23:24:39.219415 I | mvcc: store.index: compact 1048
	2024-02-13 23:24:39.220866 I | mvcc: finished scheduled compaction at 1048 (took 1.00942ms)
	
	
	==> kernel <==
	 23:27:59 up 19 min,  0 users,  load average: 0.28, 0.18, 0.16
	Linux old-k8s-version-245122 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [2ec1e75ab6923a3f9a84eb0805a1af78b3a9da0c3c21e254153b43317dc07022] <==
	I0213 23:20:43.549653       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:20:43.549893       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:20:43.549952       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:20:43.549960       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:22:43.550428       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:22:43.550594       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:22:43.550656       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:22:43.550671       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:24:43.552867       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:24:43.552973       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:24:43.553045       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:24:43.553061       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:25:43.553606       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:25:43.553745       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:25:43.553829       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:25:43.553841       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:27:43.554330       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0213 23:27:43.554816       1 handler_proxy.go:99] no RequestInfo found in the context
	E0213 23:27:43.554903       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:27:43.554955       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b4b01d14f2ef43f405c6b92b66d3b8302badbc0d091a86e422364bb673a77b15] <==
	E0213 23:21:35.685026       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:21:43.942137       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:22:05.937760       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:22:15.944931       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:22:36.189843       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:22:47.950318       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:23:06.442704       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:23:19.953104       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:23:36.695975       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:23:51.955155       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:24:06.948751       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:24:23.957915       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:24:37.201029       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:24:55.960338       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:25:07.453440       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:25:27.962649       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:25:37.705912       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:25:59.965022       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:26:07.958779       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:26:31.968020       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:26:38.210982       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:27:03.970659       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:27:08.463904       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0213 23:27:35.973227       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0213 23:27:38.715871       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [f43c15c3d3903edb8eabcc4ea8d9664f052c5562a5e8b036d9546726db7da54b] <==
	W0213 22:59:15.934244       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0213 22:59:15.965583       1 node.go:135] Successfully retrieved node IP: 192.168.50.36
	I0213 22:59:15.965692       1 server_others.go:149] Using iptables Proxier.
	I0213 22:59:15.975303       1 server.go:529] Version: v1.16.0
	I0213 22:59:15.982847       1 config.go:313] Starting service config controller
	I0213 22:59:15.983698       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0213 22:59:15.982982       1 config.go:131] Starting endpoints config controller
	I0213 22:59:15.984963       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0213 22:59:16.084279       1 shared_informer.go:204] Caches are synced for service config 
	I0213 22:59:16.088403       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0213 23:09:46.222169       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0213 23:09:46.231887       1 node.go:135] Successfully retrieved node IP: 192.168.50.36
	I0213 23:09:46.231943       1 server_others.go:149] Using iptables Proxier.
	I0213 23:09:46.233169       1 server.go:529] Version: v1.16.0
	I0213 23:09:46.234957       1 config.go:313] Starting service config controller
	I0213 23:09:46.235037       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0213 23:09:46.236737       1 config.go:131] Starting endpoints config controller
	I0213 23:09:46.236795       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0213 23:09:46.337457       1 shared_informer.go:204] Caches are synced for service config 
	I0213 23:09:46.337869       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [1626274a7b38f20a8de3c3024e320b1672be2089e34da8d3bf0482d87afa7960] <==
	E0213 22:58:53.777631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:58:54.752094       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 22:58:54.759113       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 22:58:54.769148       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 22:58:54.770103       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 22:58:54.771879       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 22:58:54.771956       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 22:58:54.773539       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 22:58:54.779005       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 22:58:54.782515       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 22:58:54.786984       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 22:58:54.792125       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 22:59:14.246254       1 factory.go:585] pod is already present in the activeQ
	E0213 22:59:14.270938       1 factory.go:585] pod is already present in the activeQ
	I0213 23:09:36.978473       1 serving.go:319] Generated self-signed cert in-memory
	W0213 23:09:42.494730       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0213 23:09:42.494980       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:09:42.495282       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0213 23:09:42.498329       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0213 23:09:42.531608       1 server.go:143] Version: v1.16.0
	I0213 23:09:42.531754       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0213 23:09:42.533913       1 authorization.go:47] Authorization is disabled
	W0213 23:09:42.533959       1 authentication.go:79] Authentication is disabled
	I0213 23:09:42.533973       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0213 23:09:42.534468       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:09:02 UTC, ends at Tue 2024-02-13 23:27:59 UTC. --
	Feb 13 23:23:46 old-k8s-version-245122 kubelet[1023]: E0213 23:23:46.287491    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:23:58 old-k8s-version-245122 kubelet[1023]: E0213 23:23:58.285370    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:24:13 old-k8s-version-245122 kubelet[1023]: E0213 23:24:13.285392    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:24:28 old-k8s-version-245122 kubelet[1023]: E0213 23:24:28.297945    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:24:34 old-k8s-version-245122 kubelet[1023]: E0213 23:24:34.366155    1023 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Feb 13 23:24:40 old-k8s-version-245122 kubelet[1023]: E0213 23:24:40.285500    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:24:51 old-k8s-version-245122 kubelet[1023]: E0213 23:24:51.285405    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:25:05 old-k8s-version-245122 kubelet[1023]: E0213 23:25:05.285396    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:25:16 old-k8s-version-245122 kubelet[1023]: E0213 23:25:16.285193    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:25:27 old-k8s-version-245122 kubelet[1023]: E0213 23:25:27.285294    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:25:40 old-k8s-version-245122 kubelet[1023]: E0213 23:25:40.285284    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:25:52 old-k8s-version-245122 kubelet[1023]: E0213 23:25:52.285295    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:26:05 old-k8s-version-245122 kubelet[1023]: E0213 23:26:05.304798    1023 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:26:05 old-k8s-version-245122 kubelet[1023]: E0213 23:26:05.304899    1023 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:26:05 old-k8s-version-245122 kubelet[1023]: E0213 23:26:05.304956    1023 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:26:05 old-k8s-version-245122 kubelet[1023]: E0213 23:26:05.304984    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Feb 13 23:26:20 old-k8s-version-245122 kubelet[1023]: E0213 23:26:20.286638    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:26:31 old-k8s-version-245122 kubelet[1023]: E0213 23:26:31.285746    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:26:44 old-k8s-version-245122 kubelet[1023]: E0213 23:26:44.287870    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:26:56 old-k8s-version-245122 kubelet[1023]: E0213 23:26:56.285320    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:27:07 old-k8s-version-245122 kubelet[1023]: E0213 23:27:07.285240    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:27:18 old-k8s-version-245122 kubelet[1023]: E0213 23:27:18.285521    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:27:29 old-k8s-version-245122 kubelet[1023]: E0213 23:27:29.285949    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:27:43 old-k8s-version-245122 kubelet[1023]: E0213 23:27:43.284964    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Feb 13 23:27:57 old-k8s-version-245122 kubelet[1023]: E0213 23:27:57.285497    1023 pod_workers.go:191] Error syncing pod cfb3f364-5eee-45a0-bd22-88d1efaefee3 ("metrics-server-74d5856cc6-c6rp6_kube-system(cfb3f364-5eee-45a0-bd22-88d1efaefee3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [9609117f701bf06b3b65bacbc3c345bb176f9c2bffbf02f562491be910248df1] <==
	I0213 22:59:17.311750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 22:59:17.322294       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 22:59:17.322561       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 22:59:17.340576       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 22:59:17.340969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720!
	I0213 22:59:17.347786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cace87c9-89a0-466f-97f9-38c9b9e6c48b", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720 became leader
	I0213 22:59:17.445007       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_678c404a-331b-49da-b95a-0e8b1412d720!
	I0213 23:09:46.905463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0213 23:10:16.907840       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ab470e6a37debae3ebd7efc0ec1d5571940c534ba94352e0ff64cda273c21249] <==
	I0213 23:10:17.724786       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:10:17.733589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:10:17.733811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:10:35.149958       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:10:35.151160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1!
	I0213 23:10:35.152596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cace87c9-89a0-466f-97f9-38c9b9e6c48b", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1 became leader
	I0213 23:10:35.251789       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-245122_e59d8682-98ba-4c99-88c8-67b46d0ef0c1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-245122 -n old-k8s-version-245122
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-245122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-c6rp6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6: exit status 1 (68.056412ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-c6rp6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-245122 describe pod metrics-server-74d5856cc6-c6rp6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (511.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (160.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-340656 -n embed-certs-340656
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:29:48.175370574 +0000 UTC m=+5607.790144484
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-340656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-340656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.591µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-340656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-340656 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-340656 logs -n 25: (1.454864912s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:28 UTC |
	| start   | -p newest-cni-120411 --memory=2200 --alsologtostderr   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:28 UTC |
	| start   | -p auto-397221 --memory=3072                           | auto-397221                  | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-120411             | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-120411                  | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-120411 --memory=2200 --alsologtostderr   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:29:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:29:09.847536   55752 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:29:09.847854   55752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:29:09.847865   55752 out.go:304] Setting ErrFile to fd 2...
	I0213 23:29:09.847870   55752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:29:09.848057   55752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:29:09.848650   55752 out.go:298] Setting JSON to false
	I0213 23:29:09.849586   55752 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7901,"bootTime":1707859049,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:29:09.849646   55752 start.go:138] virtualization: kvm guest
	I0213 23:29:09.852170   55752 out.go:177] * [newest-cni-120411] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:29:09.853650   55752 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:29:09.855093   55752 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:29:09.853715   55752 notify.go:220] Checking for updates...
	I0213 23:29:09.857558   55752 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:29:09.859212   55752 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:29:09.860664   55752 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:29:09.862350   55752 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:29:09.864045   55752 config.go:182] Loaded profile config "newest-cni-120411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:29:09.864557   55752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:09.864624   55752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:09.879064   55752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40987
	I0213 23:29:09.879479   55752 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:09.880292   55752 main.go:141] libmachine: Using API Version  1
	I0213 23:29:09.880314   55752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:09.880703   55752 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:09.880936   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:09.881207   55752 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:29:09.881626   55752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:09.881682   55752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:09.897418   55752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
	I0213 23:29:09.897844   55752 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:09.898410   55752 main.go:141] libmachine: Using API Version  1
	I0213 23:29:09.898444   55752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:09.898898   55752 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:09.899113   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:09.940741   55752 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 23:29:09.942075   55752 start.go:298] selected driver: kvm2
	I0213 23:29:09.942100   55752 start.go:902] validating driver "kvm2" against &{Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node
_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:29:09.942230   55752 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:29:09.943034   55752 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:29:09.943119   55752 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:29:09.959590   55752 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:29:09.959986   55752 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 23:29:09.960046   55752 cni.go:84] Creating CNI manager for ""
	I0213 23:29:09.960061   55752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:29:09.960087   55752 start_flags.go:321] config:
	{Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:29:09.960269   55752 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:29:09.962242   55752 out.go:177] * Starting control plane node newest-cni-120411 in cluster newest-cni-120411
	I0213 23:29:05.950845   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:05.951307   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has current primary IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:05.951360   55355 main.go:141] libmachine: (auto-397221) Found IP for machine: 192.168.72.8
	I0213 23:29:05.951403   55355 main.go:141] libmachine: (auto-397221) Reserving static IP address...
	I0213 23:29:05.951729   55355 main.go:141] libmachine: (auto-397221) DBG | unable to find host DHCP lease matching {name: "auto-397221", mac: "52:54:00:e4:4b:d3", ip: "192.168.72.8"} in network mk-auto-397221
	I0213 23:29:06.036027   55355 main.go:141] libmachine: (auto-397221) DBG | Getting to WaitForSSH function...
	I0213 23:29:06.036058   55355 main.go:141] libmachine: (auto-397221) Reserved static IP address: 192.168.72.8
	I0213 23:29:06.036074   55355 main.go:141] libmachine: (auto-397221) Waiting for SSH to be available...
	I0213 23:29:06.039245   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:06.039642   55355 main.go:141] libmachine: (auto-397221) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221
	I0213 23:29:06.039675   55355 main.go:141] libmachine: (auto-397221) DBG | unable to find defined IP address of network mk-auto-397221 interface with MAC address 52:54:00:e4:4b:d3
	I0213 23:29:06.039831   55355 main.go:141] libmachine: (auto-397221) DBG | Using SSH client type: external
	I0213 23:29:06.039874   55355 main.go:141] libmachine: (auto-397221) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa (-rw-------)
	I0213 23:29:06.039907   55355 main.go:141] libmachine: (auto-397221) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:29:06.039936   55355 main.go:141] libmachine: (auto-397221) DBG | About to run SSH command:
	I0213 23:29:06.039953   55355 main.go:141] libmachine: (auto-397221) DBG | exit 0
	I0213 23:29:06.044767   55355 main.go:141] libmachine: (auto-397221) DBG | SSH cmd err, output: exit status 255: 
	I0213 23:29:06.044801   55355 main.go:141] libmachine: (auto-397221) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0213 23:29:06.044813   55355 main.go:141] libmachine: (auto-397221) DBG | command : exit 0
	I0213 23:29:06.044824   55355 main.go:141] libmachine: (auto-397221) DBG | err     : exit status 255
	I0213 23:29:06.044837   55355 main.go:141] libmachine: (auto-397221) DBG | output  : 
	I0213 23:29:09.046957   55355 main.go:141] libmachine: (auto-397221) DBG | Getting to WaitForSSH function...
	I0213 23:29:09.049179   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.049514   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.049539   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.049649   55355 main.go:141] libmachine: (auto-397221) DBG | Using SSH client type: external
	I0213 23:29:09.049663   55355 main.go:141] libmachine: (auto-397221) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa (-rw-------)
	I0213 23:29:09.049701   55355 main.go:141] libmachine: (auto-397221) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:29:09.049716   55355 main.go:141] libmachine: (auto-397221) DBG | About to run SSH command:
	I0213 23:29:09.049731   55355 main.go:141] libmachine: (auto-397221) DBG | exit 0
	I0213 23:29:09.138171   55355 main.go:141] libmachine: (auto-397221) DBG | SSH cmd err, output: <nil>: 
	I0213 23:29:09.138468   55355 main.go:141] libmachine: (auto-397221) KVM machine creation complete!
	I0213 23:29:09.138915   55355 main.go:141] libmachine: (auto-397221) Calling .GetConfigRaw
	I0213 23:29:09.139437   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:09.139643   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:09.139855   55355 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 23:29:09.139872   55355 main.go:141] libmachine: (auto-397221) Calling .GetState
	I0213 23:29:09.141531   55355 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 23:29:09.141547   55355 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 23:29:09.141552   55355 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 23:29:09.141558   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.144507   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.144950   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.144977   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.145182   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:09.145408   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.145572   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.145733   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:09.145918   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:09.146246   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:09.146259   55355 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 23:29:09.261941   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:29:09.261965   55355 main.go:141] libmachine: Detecting the provisioner...
	I0213 23:29:09.261973   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.265331   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.265793   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.265828   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.266104   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:09.266317   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.266489   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.266678   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:09.266854   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:09.267185   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:09.267197   55355 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 23:29:09.387510   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 23:29:09.387628   55355 main.go:141] libmachine: found compatible host: buildroot
	I0213 23:29:09.387646   55355 main.go:141] libmachine: Provisioning with buildroot...
	I0213 23:29:09.387655   55355 main.go:141] libmachine: (auto-397221) Calling .GetMachineName
	I0213 23:29:09.387919   55355 buildroot.go:166] provisioning hostname "auto-397221"
	I0213 23:29:09.387953   55355 main.go:141] libmachine: (auto-397221) Calling .GetMachineName
	I0213 23:29:09.388180   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.391727   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.392157   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.392188   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.392370   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:09.392585   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.392769   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.392897   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:09.393096   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:09.393570   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:09.393591   55355 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-397221 && echo "auto-397221" | sudo tee /etc/hostname
	I0213 23:29:09.532319   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-397221
	
	I0213 23:29:09.532352   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.535629   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.536079   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.536112   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.536309   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:09.536519   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.536698   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.536887   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:09.537077   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:09.537473   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:09.537497   55355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-397221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-397221/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-397221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:29:09.672175   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:29:09.672211   55355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:29:09.672230   55355 buildroot.go:174] setting up certificates
	I0213 23:29:09.672247   55355 provision.go:83] configureAuth start
	I0213 23:29:09.672258   55355 main.go:141] libmachine: (auto-397221) Calling .GetMachineName
	I0213 23:29:09.672579   55355 main.go:141] libmachine: (auto-397221) Calling .GetIP
	I0213 23:29:09.675470   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.675842   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.675867   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.676082   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.679055   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.679388   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.679419   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.679682   55355 provision.go:138] copyHostCerts
	I0213 23:29:09.679764   55355 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:29:09.679785   55355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:29:09.679873   55355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:29:09.679992   55355 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:29:09.680003   55355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:29:09.680037   55355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:29:09.680106   55355 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:29:09.680118   55355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:29:09.680151   55355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
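
The copyHostCerts step above removes any stale key.pem, ca.pem, and cert.pem under the profile's .minikube directory and re-copies them from .minikube/certs. A minimal local sketch of that remove-then-copy pattern follows; the paths and the copyHostCert helper are illustrative, not the exact minikube layout or API.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyHostCert mirrors the remove-then-copy pattern in the log above:
// if dst already exists it is removed, then src is copied into place.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	base := os.ExpandEnv("$HOME/.minikube") // assumed location
	for _, name := range []string{"key.pem", "ca.pem", "cert.pem"} {
		if err := copyHostCert(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}
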
	I0213 23:29:09.680215   55355 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.auto-397221 san=[192.168.72.8 192.168.72.8 localhost 127.0.0.1 minikube auto-397221]
	I0213 23:29:09.969834   55355 provision.go:172] copyRemoteCerts
	I0213 23:29:09.969950   55355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:29:09.969986   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:09.973197   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.973595   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:09.973615   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:09.973797   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:09.974067   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:09.974243   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:09.974437   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:10.064567   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:29:10.094347   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:29:10.120556   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0213 23:29:10.146701   55355 provision.go:86] duration metric: configureAuth took 474.430057ms
	I0213 23:29:10.146736   55355 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:29:10.146957   55355 config.go:182] Loaded profile config "auto-397221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:29:10.147072   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:10.150448   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.150856   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.150899   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.151208   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:10.151449   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.151636   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.151825   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:10.151993   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:10.152317   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:10.152346   55355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:29:09.963630   55752 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:29:09.963687   55752 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0213 23:29:09.963699   55752 cache.go:56] Caching tarball of preloaded images
	I0213 23:29:09.963841   55752 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:29:09.963868   55752 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0213 23:29:09.963982   55752 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/config.json ...
	I0213 23:29:09.964262   55752 start.go:365] acquiring machines lock for newest-cni-120411: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:29:10.755124   55752 start.go:369] acquired machines lock for "newest-cni-120411" in 790.81408ms
	I0213 23:29:10.755188   55752 start.go:96] Skipping create...Using existing machine configuration
	I0213 23:29:10.755201   55752 fix.go:54] fixHost starting: 
	I0213 23:29:10.755621   55752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:10.755675   55752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:10.775769   55752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0213 23:29:10.776209   55752 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:10.776780   55752 main.go:141] libmachine: Using API Version  1
	I0213 23:29:10.776809   55752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:10.777224   55752 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:10.777438   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:10.777614   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetState
	I0213 23:29:10.779497   55752 fix.go:102] recreateIfNeeded on newest-cni-120411: state=Stopped err=<nil>
	I0213 23:29:10.779541   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	W0213 23:29:10.779753   55752 fix.go:128] unexpected machine state, will restart: <nil>
	I0213 23:29:10.782495   55752 out.go:177] * Restarting existing kvm2 VM for "newest-cni-120411" ...
	I0213 23:29:10.488593   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:29:10.488621   55355 main.go:141] libmachine: Checking connection to Docker...
	I0213 23:29:10.488629   55355 main.go:141] libmachine: (auto-397221) Calling .GetURL
	I0213 23:29:10.490141   55355 main.go:141] libmachine: (auto-397221) DBG | Using libvirt version 6000000
	I0213 23:29:10.492595   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.492941   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.492991   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.493135   55355 main.go:141] libmachine: Docker is up and running!
	I0213 23:29:10.493154   55355 main.go:141] libmachine: Reticulating splines...
	I0213 23:29:10.493162   55355 client.go:171] LocalClient.Create took 30.077370011s
	I0213 23:29:10.493182   55355 start.go:167] duration metric: libmachine.API.Create for "auto-397221" took 30.077449045s
	I0213 23:29:10.493191   55355 start.go:300] post-start starting for "auto-397221" (driver="kvm2")
	I0213 23:29:10.493200   55355 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:29:10.493216   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:10.493477   55355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:29:10.493503   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:10.495765   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.496136   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.496163   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.496286   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:10.496510   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.496665   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:10.496818   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:10.589907   55355 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:29:10.594770   55355 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:29:10.594804   55355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:29:10.594860   55355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:29:10.594931   55355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:29:10.595011   55355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:29:10.604845   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:29:10.628760   55355 start.go:303] post-start completed in 135.555077ms
	I0213 23:29:10.628810   55355 main.go:141] libmachine: (auto-397221) Calling .GetConfigRaw
	I0213 23:29:10.629530   55355 main.go:141] libmachine: (auto-397221) Calling .GetIP
	I0213 23:29:10.632447   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.632869   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.632898   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.633126   55355 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/config.json ...
	I0213 23:29:10.633311   55355 start.go:128] duration metric: createHost completed in 30.237306857s
	I0213 23:29:10.633337   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:10.635692   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.636068   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.636089   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.636261   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:10.636427   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.636598   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.636771   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:10.636907   55355 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:10.637214   55355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I0213 23:29:10.637231   55355 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:29:10.754965   55355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707866950.742800185
	
	I0213 23:29:10.754986   55355 fix.go:206] guest clock: 1707866950.742800185
	I0213 23:29:10.754995   55355 fix.go:219] Guest: 2024-02-13 23:29:10.742800185 +0000 UTC Remote: 2024-02-13 23:29:10.633322967 +0000 UTC m=+30.362722185 (delta=109.477218ms)
	I0213 23:29:10.755034   55355 fix.go:190] guest clock delta is within tolerance: 109.477218ms
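
fix.go reads the guest clock over SSH with `date +%s.%N` and only proceeds when its delta from the host clock stays within tolerance, as the lines above show. A small sketch of that comparison, assuming the guest timestamp has already been captured as a string and using an illustrative tolerance:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance parses a "seconds.nanoseconds" timestamp (as printed by
// `date +%s.%N` on the guest) and reports whether it is within maxSkew of
// the local clock.
func withinTolerance(guest string, maxSkew time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, false, err
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guestTime)
	return delta, math.Abs(float64(delta)) <= float64(maxSkew), nil
}

func main() {
	delta, ok, err := withinTolerance("1707866950.742800185", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
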
	I0213 23:29:10.755040   55355 start.go:83] releasing machines lock for "auto-397221", held for 30.359138779s
	I0213 23:29:10.755066   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:10.755352   55355 main.go:141] libmachine: (auto-397221) Calling .GetIP
	I0213 23:29:10.758509   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.758929   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.758963   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.759149   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:10.759732   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:10.759973   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:10.760101   55355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:29:10.760148   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:10.760190   55355 ssh_runner.go:195] Run: cat /version.json
	I0213 23:29:10.760212   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:10.763034   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.763336   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.763413   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.763448   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.763638   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:10.763844   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.763946   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:10.764010   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:10.764012   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:10.764196   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:10.764265   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:10.764421   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:10.764573   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:10.764719   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:10.882375   55355 ssh_runner.go:195] Run: systemctl --version
	I0213 23:29:10.889027   55355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:29:11.049467   55355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:29:11.055843   55355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:29:11.055923   55355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:29:11.070813   55355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:29:11.070839   55355 start.go:475] detecting cgroup driver to use...
	I0213 23:29:11.070904   55355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:29:11.090084   55355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:29:11.104224   55355 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:29:11.104285   55355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:29:11.118540   55355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:29:11.132766   55355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:29:11.247285   55355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:29:11.377478   55355 docker.go:233] disabling docker service ...
	I0213 23:29:11.377554   55355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:29:11.393577   55355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:29:11.405340   55355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:29:11.538505   55355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:29:11.664534   55355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:29:11.678329   55355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:29:11.696389   55355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:29:11.696459   55355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:11.705821   55355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:29:11.705906   55355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:11.716010   55355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:11.725750   55355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:11.736451   55355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:29:11.749906   55355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:29:11.761921   55355 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:29:11.761989   55355 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:29:11.778154   55355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:29:11.788037   55355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:29:11.905194   55355 ssh_runner.go:195] Run: sudo systemctl restart crio
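
The block above points CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd and restarts crio. A local sketch of the same line rewrite, done with Go regexps instead of sed and against an assumed copy of that drop-in file:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log above: it replaces the
// pause_image and cgroup_manager lines in a CRI-O drop-in config. The path
// is an assumption; on the node it is /etc/crio/crio.conf.d/02-crio.conf.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := rewriteCrioConf("02-crio.conf"); err != nil {
		fmt.Println("rewrite failed:", err)
	}
	// The runtime still has to be restarted afterwards, e.g.
	// `sudo systemctl daemon-reload && sudo systemctl restart crio`.
}
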
	I0213 23:29:12.112469   55355 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:29:12.112566   55355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:29:12.121378   55355 start.go:543] Will wait 60s for crictl version
	I0213 23:29:12.121450   55355 ssh_runner.go:195] Run: which crictl
	I0213 23:29:12.127508   55355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:29:12.175209   55355 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:29:12.175303   55355 ssh_runner.go:195] Run: crio --version
	I0213 23:29:12.226433   55355 ssh_runner.go:195] Run: crio --version
	I0213 23:29:12.282289   55355 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:29:10.783845   55752 main.go:141] libmachine: (newest-cni-120411) Calling .Start
	I0213 23:29:10.784048   55752 main.go:141] libmachine: (newest-cni-120411) Ensuring networks are active...
	I0213 23:29:10.784852   55752 main.go:141] libmachine: (newest-cni-120411) Ensuring network default is active
	I0213 23:29:10.785235   55752 main.go:141] libmachine: (newest-cni-120411) Ensuring network mk-newest-cni-120411 is active
	I0213 23:29:10.785701   55752 main.go:141] libmachine: (newest-cni-120411) Getting domain xml...
	I0213 23:29:10.786553   55752 main.go:141] libmachine: (newest-cni-120411) Creating domain...
	I0213 23:29:12.096910   55752 main.go:141] libmachine: (newest-cni-120411) Waiting to get IP...
	I0213 23:29:12.097912   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:12.098572   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:12.098631   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:12.098532   55787 retry.go:31] will retry after 243.603676ms: waiting for machine to come up
	I0213 23:29:12.344151   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:12.344702   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:12.344736   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:12.344641   55787 retry.go:31] will retry after 322.137176ms: waiting for machine to come up
	I0213 23:29:12.668517   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:12.669223   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:12.669246   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:12.669169   55787 retry.go:31] will retry after 439.495392ms: waiting for machine to come up
	I0213 23:29:13.110874   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:13.111357   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:13.111388   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:13.111323   55787 retry.go:31] will retry after 436.951823ms: waiting for machine to come up
	I0213 23:29:13.550028   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:13.550507   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:13.550557   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:13.550487   55787 retry.go:31] will retry after 701.324443ms: waiting for machine to come up
	I0213 23:29:14.253380   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:14.253938   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:14.253981   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:14.253847   55787 retry.go:31] will retry after 811.126698ms: waiting for machine to come up
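
In parallel, the newest-cni-120411 restart keeps polling libvirt for the VM's DHCP lease, retrying with growing delays until an IP appears. A generic sketch of that wait loop, with a hypothetical lookupIP function standing in for the libvirt query and an illustrative address:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for the DHCP-lease query in the log above; here it
// simply fails a few times before "finding" an address.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.61.73", nil // illustrative address
}

// waitForIP retries lookupIP with an increasing delay, roughly like the
// retry.go backoff visible in the log (243ms, 322ms, 439ms, ...).
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine IP: %w", err)
		}
		time.Sleep(delay)
		delay += delay / 2 // grow ~1.5x per attempt
	}
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("machine came up at", ip)
}
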
	I0213 23:29:12.283567   55355 main.go:141] libmachine: (auto-397221) Calling .GetIP
	I0213 23:29:12.286972   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:12.287342   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:12.287370   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:12.287612   55355 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0213 23:29:12.292478   55355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
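
The one-liner above drops any existing `host.minikube.internal` entry from /etc/hosts and appends the gateway address. The same filter-and-append, sketched in Go against an assumed hosts-file path (writing the real /etc/hosts would need root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostEntry removes any line for the given hostname and appends a
// fresh "ip\thostname" entry, mirroring the grep -v / echo pipeline above.
func upsertHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
	return os.WriteFile(path, []byte(out), 0o644)
}

func main() {
	if err := upsertHostEntry("hosts.test", "192.168.72.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}
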
	I0213 23:29:12.307411   55355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:29:12.307465   55355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:29:12.347934   55355 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:29:12.348014   55355 ssh_runner.go:195] Run: which lz4
	I0213 23:29:12.352693   55355 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:29:12.357310   55355 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:29:12.357336   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:29:14.226513   55355 crio.go:444] Took 1.873863 seconds to copy over tarball
	I0213 23:29:14.226594   55355 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:29:15.066603   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:15.067112   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:15.067143   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:15.067074   55787 retry.go:31] will retry after 938.949077ms: waiting for machine to come up
	I0213 23:29:16.007562   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:16.008081   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:16.008114   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:16.008010   55787 retry.go:31] will retry after 1.025271498s: waiting for machine to come up
	I0213 23:29:17.035170   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:17.035648   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:17.035679   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:17.035597   55787 retry.go:31] will retry after 1.31828217s: waiting for machine to come up
	I0213 23:29:18.354998   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:18.355572   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:18.355600   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:18.355518   55787 retry.go:31] will retry after 1.424778415s: waiting for machine to come up
	I0213 23:29:19.782051   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:19.782592   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:19.782629   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:19.782536   55787 retry.go:31] will retry after 2.34031705s: waiting for machine to come up
	I0213 23:29:17.637715   55355 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.411092492s)
	I0213 23:29:17.637753   55355 crio.go:451] Took 3.411209 seconds to extract the tarball
	I0213 23:29:17.637771   55355 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:29:17.683141   55355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:29:17.762229   55355 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:29:17.762258   55355 cache_images.go:84] Images are preloaded, skipping loading
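
Because the guest has no /preloaded.tar.lz4 yet, the runner copies the cached tarball over SSH and unpacks it under /var with `tar --xattrs -I lz4`, after which crictl reports all images as preloaded. A sketch of that existence check plus extraction, shelling out the same way; the tarball and destination paths here are assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload checks whether the preload tarball is on disk and, if so,
// unpacks it into dest with the same flags as the log above (xattrs kept,
// lz4 decompression).
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not present: %w", err)
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("skipping preload:", err)
	}
}
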
	I0213 23:29:17.762344   55355 ssh_runner.go:195] Run: crio config
	I0213 23:29:17.832794   55355 cni.go:84] Creating CNI manager for ""
	I0213 23:29:17.832823   55355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:29:17.832846   55355 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:29:17.832865   55355 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-397221 NodeName:auto-397221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:29:17.832984   55355 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-397221"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:29:17.833051   55355 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-397221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
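
The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written to the node before kubeadm init runs. One way to sanity-check such a file before feeding it to kubeadm is to walk the documents with a YAML decoder; this sketch uses gopkg.in/yaml.v3 and an assumed local file name:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// listKubeadmDocs decodes each document in a multi-doc kubeadm config and
// prints its apiVersion/kind, surfacing YAML syntax errors early.
func listKubeadmDocs(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				return nil
			}
			return fmt.Errorf("document %d: %w", i, err)
		}
		fmt.Printf("doc %d: %s %s\n", i, doc.APIVersion, doc.Kind)
	}
}

func main() {
	if err := listKubeadmDocs("kubeadm.yaml"); err != nil {
		fmt.Println("config check failed:", err)
	}
}
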
	I0213 23:29:17.833102   55355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:29:17.842687   55355 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:29:17.842770   55355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:29:17.854285   55355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I0213 23:29:17.873066   55355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:29:17.891491   55355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0213 23:29:17.911596   55355 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I0213 23:29:17.916760   55355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:29:17.931055   55355 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221 for IP: 192.168.72.8
	I0213 23:29:17.931091   55355 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:17.931236   55355 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:29:17.931280   55355 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:29:17.931324   55355 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.key
	I0213 23:29:17.931341   55355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.crt with IP's: []
	I0213 23:29:18.370709   55355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.crt ...
	I0213 23:29:18.370739   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.crt: {Name:mk8a5eed82d0172da650af671bce79c41c6ae8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.370906   55355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.key ...
	I0213 23:29:18.370926   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/client.key: {Name:mk1006660cfd857bdcd75384d34772ed87a6a5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.370997   55355 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key.d50996cd
	I0213 23:29:18.371010   55355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt.d50996cd with IP's: [192.168.72.8 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 23:29:18.504301   55355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt.d50996cd ...
	I0213 23:29:18.504342   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt.d50996cd: {Name:mkc60a974fb4a761c69fdf9d41591416535f9314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.504523   55355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key.d50996cd ...
	I0213 23:29:18.504540   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key.d50996cd: {Name:mkb068ec2ad42fc070150f163127ebc15238c6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.504642   55355 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt.d50996cd -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt
	I0213 23:29:18.504758   55355 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key.d50996cd -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key
	I0213 23:29:18.504832   55355 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.key
	I0213 23:29:18.504851   55355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.crt with IP's: []
	I0213 23:29:18.589172   55355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.crt ...
	I0213 23:29:18.589204   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.crt: {Name:mk154afca8ac9ca561227087d4bf310c3494269b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.589362   55355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.key ...
	I0213 23:29:18.589373   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.key: {Name:mka4470f15a64cfc8672ac4a8f83748db1317ce7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:18.589535   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:29:18.589569   55355 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:29:18.589576   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:29:18.589599   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:29:18.589623   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:29:18.589645   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:29:18.589680   55355 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:29:18.590385   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:29:18.619627   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:29:18.646306   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:29:18.672010   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/auto-397221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 23:29:18.698817   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:29:18.725969   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:29:18.753340   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:29:18.782891   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:29:18.808987   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:29:18.839019   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:29:18.865309   55355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:29:18.895916   55355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:29:18.917816   55355 ssh_runner.go:195] Run: openssl version
	I0213 23:29:18.924697   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:29:18.935898   55355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:18.941519   55355 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:18.941623   55355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:18.948043   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:29:18.959946   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:29:18.971303   55355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:29:18.977253   55355 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:29:18.977351   55355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:29:18.983576   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:29:18.995368   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:29:19.007869   55355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:29:19.014518   55355 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:29:19.014578   55355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:29:19.022474   55355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
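
The three `openssl x509 -hash` / `ln -fs` pairs above are how the minikube CA, the per-user cert, and the shared test cert get registered in /etc/ssl/certs under their subject-hash names (b5213941.0 and so on). A sketch of one such step, shelling out to openssl the same way; the certsDir value and linkBySubjectHash helper are assumptions for illustration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the "<hash>.0" symlink that OpenSSL-based clients look up.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("minikubeCA.pem", "."); err != nil {
		fmt.Println("link failed:", err)
	}
}
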
	I0213 23:29:19.034846   55355 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:29:19.041109   55355 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 23:29:19.041162   55355 kubeadm.go:404] StartCluster: {Name:auto-397221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:29:19.041249   55355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:29:19.041326   55355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:29:19.089321   55355 cri.go:89] found id: ""
	I0213 23:29:19.089397   55355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:29:19.100411   55355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:29:19.111225   55355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:29:19.121674   55355 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:29:19.121733   55355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:29:19.353465   55355 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
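
The [WARNING Service-Kubelet] line is kubeadm's preflight noting that the kubelet unit is not enabled; minikube tolerates it, and the remediation kubeadm itself suggests is a single systemctl call. A tiny sketch of issuing that call from Go, assuming the process already runs with sufficient privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// enableKubelet runs the command kubeadm's preflight warning recommends.
func enableKubelet() error {
	cmd := exec.Command("systemctl", "enable", "kubelet.service")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := enableKubelet(); err != nil {
		fmt.Println("enable failed:", err)
	}
}
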
	I0213 23:29:22.125126   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:22.125582   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:22.125612   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:22.125534   55787 retry.go:31] will retry after 2.598874004s: waiting for machine to come up
	I0213 23:29:24.727358   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:24.727861   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:24.727890   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:24.727784   55787 retry.go:31] will retry after 2.862578604s: waiting for machine to come up
	I0213 23:29:27.592137   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:27.592642   55752 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:29:27.592669   55752 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:29:27.592577   55787 retry.go:31] will retry after 3.75022664s: waiting for machine to come up
	I0213 23:29:32.089105   55355 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:29:32.089198   55355 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:29:32.089299   55355 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:29:32.089397   55355 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:29:32.089501   55355 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:29:32.089557   55355 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:29:32.091241   55355 out.go:204]   - Generating certificates and keys ...
	I0213 23:29:32.091349   55355 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:29:32.091438   55355 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:29:32.091559   55355 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 23:29:32.091650   55355 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 23:29:32.091726   55355 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 23:29:32.091865   55355 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 23:29:32.091952   55355 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 23:29:32.092113   55355 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-397221 localhost] and IPs [192.168.72.8 127.0.0.1 ::1]
	I0213 23:29:32.092182   55355 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 23:29:32.092346   55355 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-397221 localhost] and IPs [192.168.72.8 127.0.0.1 ::1]
	I0213 23:29:32.092445   55355 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 23:29:32.092516   55355 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 23:29:32.092579   55355 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 23:29:32.092656   55355 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:29:32.092730   55355 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:29:32.092815   55355 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:29:32.092915   55355 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:29:32.092988   55355 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:29:32.093081   55355 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:29:32.093161   55355 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:29:32.094767   55355 out.go:204]   - Booting up control plane ...
	I0213 23:29:32.094890   55355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:29:32.094999   55355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:29:32.095101   55355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:29:32.095235   55355 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:29:32.095381   55355 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:29:32.095453   55355 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:29:32.095661   55355 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:29:32.095757   55355 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504905 seconds
	I0213 23:29:32.095916   55355 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:29:32.096056   55355 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:29:32.096134   55355 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:29:32.096354   55355 kubeadm.go:322] [mark-control-plane] Marking the node auto-397221 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:29:32.096447   55355 kubeadm.go:322] [bootstrap-token] Using token: 902gq6.pc7rb2spksioigbj
	I0213 23:29:32.097990   55355 out.go:204]   - Configuring RBAC rules ...
	I0213 23:29:32.098151   55355 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:29:32.098289   55355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:29:32.098465   55355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:29:32.098646   55355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:29:32.098807   55355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:29:32.098917   55355 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:29:32.099065   55355 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:29:32.099165   55355 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:29:32.099228   55355 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:29:32.099247   55355 kubeadm.go:322] 
	I0213 23:29:32.099323   55355 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:29:32.099333   55355 kubeadm.go:322] 
	I0213 23:29:32.099419   55355 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:29:32.099429   55355 kubeadm.go:322] 
	I0213 23:29:32.099453   55355 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:29:32.099532   55355 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:29:32.099602   55355 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:29:32.099612   55355 kubeadm.go:322] 
	I0213 23:29:32.099679   55355 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:29:32.099689   55355 kubeadm.go:322] 
	I0213 23:29:32.099759   55355 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:29:32.099769   55355 kubeadm.go:322] 
	I0213 23:29:32.099838   55355 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:29:32.099926   55355 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:29:32.100010   55355 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:29:32.100029   55355 kubeadm.go:322] 
	I0213 23:29:32.100107   55355 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:29:32.100177   55355 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:29:32.100183   55355 kubeadm.go:322] 
	I0213 23:29:32.100249   55355 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 902gq6.pc7rb2spksioigbj \
	I0213 23:29:32.100345   55355 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:29:32.100369   55355 kubeadm.go:322] 	--control-plane 
	I0213 23:29:32.100378   55355 kubeadm.go:322] 
	I0213 23:29:32.100475   55355 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:29:32.100487   55355 kubeadm.go:322] 
	I0213 23:29:32.100577   55355 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 902gq6.pc7rb2spksioigbj \
	I0213 23:29:32.100673   55355 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
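	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which lets a joining node pin the CA it discovers over the bootstrap token. A hedged sketch of recomputing that hash (the certificate path is an assumption):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed location of the cluster CA certificate inside the minikube VM.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in CA certificate")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}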
	I0213 23:29:32.100684   55355 cni.go:84] Creating CNI manager for ""
	I0213 23:29:32.100691   55355 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:29:32.102795   55355 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0213 23:29:31.343979   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.344469   55752 main.go:141] libmachine: (newest-cni-120411) Found IP for machine: 192.168.50.143
	I0213 23:29:31.344514   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has current primary IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.344525   55752 main.go:141] libmachine: (newest-cni-120411) Reserving static IP address...
	I0213 23:29:31.344894   55752 main.go:141] libmachine: (newest-cni-120411) Reserved static IP address: 192.168.50.143
	I0213 23:29:31.344919   55752 main.go:141] libmachine: (newest-cni-120411) Waiting for SSH to be available...
	I0213 23:29:31.344942   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "newest-cni-120411", mac: "52:54:00:e5:49:c2", ip: "192.168.50.143"} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.344965   55752 main.go:141] libmachine: (newest-cni-120411) DBG | skip adding static IP to network mk-newest-cni-120411 - found existing host DHCP lease matching {name: "newest-cni-120411", mac: "52:54:00:e5:49:c2", ip: "192.168.50.143"}
	I0213 23:29:31.344984   55752 main.go:141] libmachine: (newest-cni-120411) DBG | Getting to WaitForSSH function...
	I0213 23:29:31.347060   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.347528   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.347555   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.347779   55752 main.go:141] libmachine: (newest-cni-120411) DBG | Using SSH client type: external
	I0213 23:29:31.347803   55752 main.go:141] libmachine: (newest-cni-120411) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa (-rw-------)
	I0213 23:29:31.347823   55752 main.go:141] libmachine: (newest-cni-120411) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:29:31.347831   55752 main.go:141] libmachine: (newest-cni-120411) DBG | About to run SSH command:
	I0213 23:29:31.347846   55752 main.go:141] libmachine: (newest-cni-120411) DBG | exit 0
	I0213 23:29:31.446434   55752 main.go:141] libmachine: (newest-cni-120411) DBG | SSH cmd err, output: <nil>: 
	I0213 23:29:31.446885   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetConfigRaw
	I0213 23:29:31.447603   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:29:31.450676   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.451169   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.451227   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.451479   55752 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/config.json ...
	I0213 23:29:31.451731   55752 machine.go:88] provisioning docker machine ...
	I0213 23:29:31.451755   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:31.452017   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:29:31.452231   55752 buildroot.go:166] provisioning hostname "newest-cni-120411"
	I0213 23:29:31.452253   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:29:31.452404   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:31.454886   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.455261   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.455291   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.455465   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:31.455658   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:31.455848   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:31.455988   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:31.456150   55752 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:31.456652   55752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:29:31.456675   55752 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-120411 && echo "newest-cni-120411" | sudo tee /etc/hostname
	I0213 23:29:31.605893   55752 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-120411
	
	I0213 23:29:31.605938   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:31.609166   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.609557   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.609586   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.609847   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:31.610104   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:31.610312   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:31.610474   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:31.610681   55752 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:31.611022   55752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:29:31.611043   55752 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120411/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:29:31.760612   55752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
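	The SSH command above makes the hostname entry in /etc/hosts idempotent: do nothing if a line already ends with the hostname, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic expressed as a Go sketch (the path and function name are illustrative):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		text := string(data)
		// Already present: a line ending in the hostname.
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(text) {
			return nil
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(text) {
			text = loopback.ReplaceAllString(text, "127.0.1.1 "+name)
		} else {
			text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(hostsPath, []byte(text), 0o644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "newest-cni-120411"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}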
	I0213 23:29:31.760650   55752 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:29:31.760680   55752 buildroot.go:174] setting up certificates
	I0213 23:29:31.760696   55752 provision.go:83] configureAuth start
	I0213 23:29:31.760713   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:29:31.760988   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:29:31.764132   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.764546   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.764577   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.764721   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:31.767207   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.767606   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:31.767637   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:31.767824   55752 provision.go:138] copyHostCerts
	I0213 23:29:31.767888   55752 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:29:31.767910   55752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:29:31.767988   55752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:29:31.768154   55752 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:29:31.768167   55752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:29:31.768199   55752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:29:31.768267   55752 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:29:31.768277   55752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:29:31.768298   55752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:29:31.768346   55752 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120411 san=[192.168.50.143 192.168.50.143 localhost 127.0.0.1 minikube newest-cni-120411]
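	The server certificate generated here carries both IP and DNS SANs (192.168.50.143, 127.0.0.1, localhost, minikube, newest-cni-120411) so the machine's TLS endpoint can be reached by address or by name. A self-signed Go sketch with the same SAN list (minikube actually signs with its machine CA; the self-signing and output file name are assumptions for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-120411"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-120411"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.50.143"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed here for brevity; the real flow signs with the machine CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", pemCert, 0o644); err != nil {
			panic(err)
		}
	}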
	I0213 23:29:32.059523   55752 provision.go:172] copyRemoteCerts
	I0213 23:29:32.059593   55752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:29:32.059617   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.063390   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.063856   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.063894   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.064162   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.064392   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.064571   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.064751   55752 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:29:32.167926   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:29:32.199856   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:29:32.227258   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:29:32.256449   55752 provision.go:86] duration metric: configureAuth took 495.733316ms
	I0213 23:29:32.256482   55752 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:29:32.256700   55752 config.go:182] Loaded profile config "newest-cni-120411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:29:32.256792   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.260503   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.260958   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.261010   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.261342   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.261640   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.261864   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.262046   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.262304   55752 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:32.262818   55752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:29:32.262856   55752 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:29:32.663329   55752 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:29:32.663365   55752 machine.go:91] provisioned docker machine in 1.211617892s
	I0213 23:29:32.663398   55752 start.go:300] post-start starting for "newest-cni-120411" (driver="kvm2")
	I0213 23:29:32.663416   55752 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:29:32.663438   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:32.663831   55752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:29:32.663863   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.667096   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.667635   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.667664   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.667855   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.668077   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.668291   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.668484   55752 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:29:32.768349   55752 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:29:32.772908   55752 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:29:32.772936   55752 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:29:32.773001   55752 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:29:32.773075   55752 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:29:32.773164   55752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:29:32.782380   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:29:32.808200   55752 start.go:303] post-start completed in 144.784877ms
	I0213 23:29:32.808229   55752 fix.go:56] fixHost completed within 22.053028592s
	I0213 23:29:32.808255   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.811082   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.811630   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.811654   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.811883   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.812078   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.812257   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.812465   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.812653   55752 main.go:141] libmachine: Using SSH client type: native
	I0213 23:29:32.813107   55752 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:29:32.813126   55752 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:29:32.950613   55752 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707866972.895371939
	
	I0213 23:29:32.950640   55752 fix.go:206] guest clock: 1707866972.895371939
	I0213 23:29:32.950648   55752 fix.go:219] Guest: 2024-02-13 23:29:32.895371939 +0000 UTC Remote: 2024-02-13 23:29:32.808233678 +0000 UTC m=+23.017380393 (delta=87.138261ms)
	I0213 23:29:32.950666   55752 fix.go:190] guest clock delta is within tolerance: 87.138261ms
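	The fix.go lines compare the guest clock against the host timestamp and only flag drift outside a tolerance; here the ~87ms delta passes. A small sketch of that comparison (the 1s tolerance is an assumed value, not minikube's constant):

	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		guest := time.Unix(0, 1707866972895371939) // 2024-02-13 23:29:32.895371939 UTC, from the log
		host := guest.Add(-87138261 * time.Nanosecond)
		fmt.Println(withinTolerance(guest, host, time.Second)) // true: ~87ms of drift
	}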
	I0213 23:29:32.950674   55752 start.go:83] releasing machines lock for "newest-cni-120411", held for 22.195515493s
	I0213 23:29:32.950702   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:32.950971   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:29:32.954100   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.954548   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.954579   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.954743   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:32.955339   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:32.955563   55752 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:29:32.955663   55752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:29:32.955708   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.956008   55752 ssh_runner.go:195] Run: cat /version.json
	I0213 23:29:32.956034   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:29:32.958965   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.959251   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.959325   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.959356   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.959540   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.959733   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.959878   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.959888   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:32.959910   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:32.960053   55752 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:29:32.960129   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:29:32.960304   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:29:32.960422   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:29:32.960532   55752 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:29:33.055603   55752 ssh_runner.go:195] Run: systemctl --version
	I0213 23:29:33.079744   55752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:29:33.234286   55752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:29:33.241748   55752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:29:33.241830   55752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:29:33.259109   55752 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:29:33.259140   55752 start.go:475] detecting cgroup driver to use...
	I0213 23:29:33.259212   55752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:29:33.274478   55752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:29:33.289271   55752 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:29:33.289332   55752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:29:33.305292   55752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:29:33.320235   55752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:29:33.433948   55752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:29:33.580914   55752 docker.go:233] disabling docker service ...
	I0213 23:29:33.580992   55752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:29:33.599689   55752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:29:33.614381   55752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:29:33.745995   55752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:29:33.876835   55752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:29:33.892673   55752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:29:33.912979   55752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:29:33.913053   55752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:33.924970   55752 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:29:33.925042   55752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:33.937158   55752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:33.948840   55752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:29:33.961398   55752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
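	The sed invocations above pin the CRI-O pause image to registry.k8s.io/pause:3.9 and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. An equivalent in-place rewrite sketched in Go (the regex-based approach is an illustration, not minikube's implementation):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		text := string(data)
		// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(conf, []byte(text), 0o644); err != nil {
			panic(err)
		}
	}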
	I0213 23:29:33.974612   55752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:29:33.985323   55752 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:29:33.985415   55752 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:29:34.001681   55752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
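	When the bridge-nf-call-iptables sysctl is missing, the node simply has not loaded the br_netfilter module yet, so the failure is tolerated, the module is loaded, and IPv4 forwarding is enabled. A minimal root-only sketch of that fallback:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Module not loaded yet; mirror "sudo modprobe br_netfilter".
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			panic(err)
		}
	}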
	I0213 23:29:34.012952   55752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:29:34.141094   55752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:29:34.319553   55752 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:29:34.319629   55752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:29:34.324618   55752 start.go:543] Will wait 60s for crictl version
	I0213 23:29:34.324688   55752 ssh_runner.go:195] Run: which crictl
	I0213 23:29:34.328617   55752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:29:34.370665   55752 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:29:34.370767   55752 ssh_runner.go:195] Run: crio --version
	I0213 23:29:34.420037   55752 ssh_runner.go:195] Run: crio --version
	I0213 23:29:34.472447   55752 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:29:34.473781   55752 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:29:34.476769   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:34.477194   55752 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:29:34.477222   55752 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:29:34.477483   55752 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:29:34.481781   55752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:29:34.497689   55752 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0213 23:29:34.499127   55752 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:29:34.499196   55752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:29:34.554018   55752 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:29:34.554111   55752 ssh_runner.go:195] Run: which lz4
	I0213 23:29:34.558590   55752 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:29:34.563312   55752 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:29:34.563355   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0213 23:29:32.104912   55355 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0213 23:29:32.120322   55355 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
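	The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in the log; the sketch below emits a typical bridge + portmap conflist of that general shape, with every field value assumed for illustration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod subnet
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}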
	I0213 23:29:32.170888   55355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0213 23:29:32.170964   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:32.170976   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e minikube.k8s.io/name=auto-397221 minikube.k8s.io/updated_at=2024_02_13T23_29_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:32.539108   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:32.623659   55355 ops.go:34] apiserver oom_adj: -16
	I0213 23:29:33.039729   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:33.540183   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:34.040133   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:34.539207   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:35.039237   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:36.269957   55752 crio.go:444] Took 1.711406 seconds to copy over tarball
	I0213 23:29:36.270040   55752 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:29:39.292628   55752 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.022552902s)
	I0213 23:29:39.292661   55752 crio.go:451] Took 3.022672 seconds to extract the tarball
	I0213 23:29:39.292673   55752 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:29:39.332184   55752 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:29:39.382558   55752 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:29:39.382589   55752 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:29:39.382666   55752 ssh_runner.go:195] Run: crio config
	I0213 23:29:39.447261   55752 cni.go:84] Creating CNI manager for ""
	I0213 23:29:39.447283   55752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:29:39.447299   55752 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0213 23:29:39.447325   55752 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.143 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120411 NodeName:newest-cni-120411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:29:39.447545   55752 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-120411"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:29:39.447643   55752 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-120411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:29:39.447707   55752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:29:39.456818   55752 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:29:39.456901   55752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:29:39.465646   55752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0213 23:29:39.482550   55752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:29:39.499733   55752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0213 23:29:39.519139   55752 ssh_runner.go:195] Run: grep 192.168.50.143	control-plane.minikube.internal$ /etc/hosts
	I0213 23:29:39.522941   55752 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:29:39.536302   55752 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411 for IP: 192.168.50.143
	I0213 23:29:39.536340   55752 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:39.536482   55752 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:29:39.536552   55752 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:29:39.536652   55752 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.key
	I0213 23:29:39.536746   55752 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key.3c7b2e1c
	I0213 23:29:39.536805   55752 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key
	I0213 23:29:39.536940   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:29:39.536974   55752 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:29:39.536984   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:29:39.537008   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:29:39.537035   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:29:39.537057   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:29:39.537093   55752 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:29:39.537672   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:29:39.573173   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:29:39.600186   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:29:39.632744   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 23:29:39.658406   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:29:39.684065   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:29:39.709137   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:29:39.734032   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:29:39.758965   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:29:39.783110   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:29:39.808824   55752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:29:39.837342   55752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:29:35.539668   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:36.040099   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:36.539926   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:37.039712   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:37.539403   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:38.039253   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:38.540075   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:39.039200   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:39.539663   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:40.039165   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:39.856643   55752 ssh_runner.go:195] Run: openssl version
	I0213 23:29:40.307492   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:29:40.319162   55752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:40.324747   55752 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:40.324841   55752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:29:40.330860   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:29:40.342755   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:29:40.355142   55752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:29:40.360272   55752 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:29:40.360343   55752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:29:40.366999   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:29:40.377848   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:29:40.388624   55752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:29:40.393608   55752 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:29:40.393669   55752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:29:40.399738   55752 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
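The three blocks above follow the same pattern for each CA bundle: hash the PEM with openssl, then expose it in /etc/ssl/certs under the <hash>.0 name so system TLS clients trust it. A minimal Go sketch of that pattern is below; it is illustrative only (not minikube source), and the paths are the ones shown in the log.

```go
// Sketch: compute the OpenSSL subject hash of a CA bundle and link it into
// /etc/ssl/certs as <hash>.0, mirroring the "openssl x509 -hash" + "ln -fs"
// pair in the log above. Assumes openssl is on PATH and the caller has
// permission to write to the certs directory.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror "ln -fs": replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```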
	I0213 23:29:40.412595   55752 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:29:40.418461   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0213 23:29:40.424654   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0213 23:29:40.431024   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0213 23:29:40.437106   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0213 23:29:40.443914   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0213 23:29:40.450017   55752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
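Each of the "-checkend 86400" runs above asks whether a certificate will expire within the next 24 hours before the cluster is (re)started. The Go sketch below does the same check with crypto/x509; it is a hedged illustration under the assumption that the cert is a single PEM block, not minikube's actual implementation.

```go
// Sketch: report whether a certificate expires within the given window,
// equivalent in spirit to "openssl x509 -noout -in <cert> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; the log above checks the certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```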
	I0213 23:29:40.455944   55752 kubeadm.go:404] StartCluster: {Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false syste
m_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:29:40.456056   55752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:29:40.456112   55752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:29:40.498280   55752 cri.go:89] found id: ""
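The crictl query above returns no container IDs (`found id: ""`), so there is nothing from a previous run to tear down; the next lines then check for existing kubeadm config files to decide between a fresh init and a cluster restart. For reference, a small Go sketch of the same query, run locally against the node's CRI-O socket rather than over SSH (illustrative only):

```go
// Sketch: list kube-system containers (running or exited) via crictl, the same
// query minikube issues over SSH in the log above. Requires crictl and sudo on
// the node being inspected.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system container(s)\n", len(ids))
}
```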
	I0213 23:29:40.498367   55752 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:29:40.509743   55752 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0213 23:29:40.509771   55752 kubeadm.go:636] restartCluster start
	I0213 23:29:40.509855   55752 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0213 23:29:40.520077   55752 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:40.698220   55752 kubeconfig.go:135] verify returned: extract IP: "newest-cni-120411" does not appear in /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:29:40.698789   55752 kubeconfig.go:146] "newest-cni-120411" context is missing from /home/jenkins/minikube-integration/18171-8990/kubeconfig - will repair!
	I0213 23:29:40.699728   55752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:40.748555   55752 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0213 23:29:40.762531   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:40.762609   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:40.775972   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:41.263042   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:41.263132   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:41.279036   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:41.762584   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:41.762694   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:41.777122   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:42.262683   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:42.262767   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:42.275744   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:42.763338   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:42.763418   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:42.775829   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:43.263435   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:43.263546   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:43.276303   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:43.762829   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:43.762913   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:43.777481   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:44.262782   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:44.262916   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:44.275642   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0213 23:29:44.763256   55752 api_server.go:166] Checking apiserver status ...
	I0213 23:29:44.763341   55752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0213 23:29:44.776721   55752 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
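The repeated "Checking apiserver status ... stopped: unable to get apiserver pid" entries above are a probe loop: roughly every 500ms, pgrep is asked for a kube-apiserver process, and the warning is logged until one appears. A minimal sketch of that loop follows; the timeout and function names are assumptions for illustration, not minikube's actual constants.

```go
// Sketch: poll for a kube-apiserver process with pgrep until it appears or a
// deadline passes, mirroring the ~500ms retry cadence visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // pid found
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}
```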
	I0213 23:29:40.539789   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:41.149901   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:41.539987   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:42.039160   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:42.539885   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:43.039264   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:43.540049   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:44.039844   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:44.539770   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:45.039599   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:45.540003   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:46.039293   55355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0213 23:29:46.180476   55355 kubeadm.go:1088] duration metric: took 14.009570659s to wait for elevateKubeSystemPrivileges.
	I0213 23:29:46.180511   55355 kubeadm.go:406] StartCluster complete in 27.139353214s
	I0213 23:29:46.180533   55355 settings.go:142] acquiring lock: {Name:mk90e096e2eb7d37beee7d8775855a9c9781bd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:46.180599   55355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:29:46.184666   55355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/kubeconfig: {Name:mk3c8171005f9136160cd163d0f3ba7866a504d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:29:46.185054   55355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0213 23:29:46.185228   55355 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0213 23:29:46.185310   55355 addons.go:69] Setting storage-provisioner=true in profile "auto-397221"
	I0213 23:29:46.185356   55355 addons.go:234] Setting addon storage-provisioner=true in "auto-397221"
	I0213 23:29:46.185265   55355 config.go:182] Loaded profile config "auto-397221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:29:46.185396   55355 addons.go:69] Setting default-storageclass=true in profile "auto-397221"
	I0213 23:29:46.185412   55355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-397221"
	I0213 23:29:46.185419   55355 host.go:66] Checking if "auto-397221" exists ...
	I0213 23:29:46.185962   55355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:46.185960   55355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:46.186014   55355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:46.186030   55355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:46.204306   55355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0213 23:29:46.204333   55355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0213 23:29:46.204785   55355 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:46.205051   55355 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:46.205362   55355 main.go:141] libmachine: Using API Version  1
	I0213 23:29:46.205391   55355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:46.205522   55355 main.go:141] libmachine: Using API Version  1
	I0213 23:29:46.205548   55355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:46.205857   55355 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:46.205928   55355 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:46.206107   55355 main.go:141] libmachine: (auto-397221) Calling .GetState
	I0213 23:29:46.206433   55355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:46.206486   55355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:46.209860   55355 addons.go:234] Setting addon default-storageclass=true in "auto-397221"
	I0213 23:29:46.209954   55355 host.go:66] Checking if "auto-397221" exists ...
	I0213 23:29:46.210358   55355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:46.210409   55355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:46.225033   55355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35157
	I0213 23:29:46.225471   55355 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:46.226048   55355 main.go:141] libmachine: Using API Version  1
	I0213 23:29:46.226078   55355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:46.226423   55355 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:46.226609   55355 main.go:141] libmachine: (auto-397221) Calling .GetState
	I0213 23:29:46.226761   55355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0213 23:29:46.227145   55355 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:46.227667   55355 main.go:141] libmachine: Using API Version  1
	I0213 23:29:46.227692   55355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:46.228074   55355 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:46.228699   55355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:29:46.228757   55355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:29:46.229026   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:46.231052   55355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0213 23:29:46.232661   55355 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0213 23:29:46.232681   55355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0213 23:29:46.232704   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:46.236569   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:46.236959   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:46.236991   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:46.237229   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:46.237464   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:46.237631   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:46.237769   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:46.250621   55355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39607
	I0213 23:29:46.251073   55355 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:29:46.251811   55355 main.go:141] libmachine: Using API Version  1
	I0213 23:29:46.251855   55355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:29:46.252188   55355 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:29:46.252409   55355 main.go:141] libmachine: (auto-397221) Calling .GetState
	I0213 23:29:46.254822   55355 main.go:141] libmachine: (auto-397221) Calling .DriverName
	I0213 23:29:46.255108   55355 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0213 23:29:46.255122   55355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0213 23:29:46.255141   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHHostname
	I0213 23:29:46.258830   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:46.259235   55355 main.go:141] libmachine: (auto-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:4b:d3", ip: ""} in network mk-auto-397221: {Iface:virbr1 ExpiryTime:2024-02-14 00:28:57 +0000 UTC Type:0 Mac:52:54:00:e4:4b:d3 Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:auto-397221 Clientid:01:52:54:00:e4:4b:d3}
	I0213 23:29:46.259262   55355 main.go:141] libmachine: (auto-397221) DBG | domain auto-397221 has defined IP address 192.168.72.8 and MAC address 52:54:00:e4:4b:d3 in network mk-auto-397221
	I0213 23:29:46.259616   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHPort
	I0213 23:29:46.259860   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHKeyPath
	I0213 23:29:46.260020   55355 main.go:141] libmachine: (auto-397221) Calling .GetSSHUsername
	I0213 23:29:46.260190   55355 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/auto-397221/id_rsa Username:docker}
	I0213 23:29:46.562141   55355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0213 23:29:46.563893   55355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0213 23:29:46.568608   55355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
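The two apply commands above install the staged addon manifests (storageclass.yaml and storage-provisioner.yaml) with the version-pinned kubectl binary and the control-plane kubeconfig. A hedged Go sketch of that step is below; it uses `sudo env KUBECONFIG=...` to set the environment, which has the same effect as the `sudo KUBECONFIG=... kubectl` shell form in the log, and the paths simply mirror the log.

```go
// Sketch: apply a staged addon manifest with a pinned kubectl binary and an
// explicit kubeconfig, as the log above does for the storage addons.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", "env", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
```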
	I0213 23:29:46.702924   55355 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-397221" context rescaled to 1 replicas
	I0213 23:29:46.702972   55355 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:29:46.704968   55355 out.go:177] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:21 UTC, ends at Tue 2024-02-13 23:29:49 UTC. --
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.027926917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d10ec9f1-4781-4f9a-beec-1a9303023df4 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.029206283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=086f0ff6-6105-49ec-b25e-2339ecafd6a2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.029831809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866989029811010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=086f0ff6-6105-49ec-b25e-2339ecafd6a2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.030501854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c662ea88-74c8-4c02-add8-7e960ab01517 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.030565867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c662ea88-74c8-4c02-add8-7e960ab01517 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.030900975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c662ea88-74c8-4c02-add8-7e960ab01517 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.083004998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c885ac51-3c45-4bda-93cb-3595192d167c name=/runtime.v1.RuntimeService/Version
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.083104161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c885ac51-3c45-4bda-93cb-3595192d167c name=/runtime.v1.RuntimeService/Version
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.084927060Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=df9f9cfa-5ba9-40f5-8901-241ea99be26c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.085569627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866989085549832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=df9f9cfa-5ba9-40f5-8901-241ea99be26c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.087208783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=45af1445-7a62-43a8-84b4-b6efaaae429e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.087420358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=45af1445-7a62-43a8-84b4-b6efaaae429e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.087659106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=45af1445-7a62-43a8-84b4-b6efaaae429e name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.155374004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ff4272c7-70b1-4b92-b974-072a4c32cace name=/runtime.v1.RuntimeService/Version
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.155490024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ff4272c7-70b1-4b92-b974-072a4c32cace name=/runtime.v1.RuntimeService/Version
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.157275887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9ead3fcc-1c7d-4769-b2b7-8501af9632a7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.157801762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866989157698851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9ead3fcc-1c7d-4769-b2b7-8501af9632a7 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.158517706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5fe3ac56-d8d9-4b9b-abc9-c17b58ddae41 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.158563982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5fe3ac56-d8d9-4b9b-abc9-c17b58ddae41 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.158833518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5fe3ac56-d8d9-4b9b-abc9-c17b58ddae41 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.175089715Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f8f56ca8-6faa-46a3-ad28-58d6771e819c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.175294421Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bae1a46ded81dd8efd774d74f380bc1b8e4a2dd9c33e05e865b71b4bf77e498b,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9vcz5,Uid:8df81e37-71b7-4220-9652-070538ce5a7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866031908309960,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9vcz5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df81e37-71b7-4220-9652-070538ce5a7f,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:51.572451059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1cdcb32e-024c-4055-b02f-807b7cc69b74,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866031838669120,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T23:13:51.503442090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&PodSandboxMetadata{Name:kube-proxy-4vgt5,Uid:456eb472-9014-4674-b03c-8e2a0997455b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866029516285795,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:48.873125769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-vrbjt,Ui
d:74c7f72d-10b1-467f-92ac-2888540bd3a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866029466215937,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:13:49.130674420Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-340656,Uid:fe9b7248f5e11d263240042b6cccb18a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007582037664,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: fe9b7248f5e11d263240042b6cccb18a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe9b7248f5e11d263240042b6cccb18a,kubernetes.io/config.seen: 2024-02-13T23:13:27.043611970Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-340656,Uid:65b418825c26a2b239b9b23b38957138,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007577338551,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.56:8443,kubernetes.io/config.hash: 65b418825c26a2b239b9b23b38957138,kubernetes.io/config.seen: 2024-02-13T23:13:27.043610375Z,ku
bernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-340656,Uid:87fc2c43d84856cc722d882ffa68fd93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866007563365715,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fc2c43d84856cc722d882ffa68fd93,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 87fc2c43d84856cc722d882ffa68fd93,kubernetes.io/config.seen: 2024-02-13T23:13:27.043613387Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-340656,Uid:4efe22c69fab880a31247949f69305fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:
1707866007521560402,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.56:2379,kubernetes.io/config.hash: 4efe22c69fab880a31247949f69305fe,kubernetes.io/config.seen: 2024-02-13T23:13:27.043603525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=f8f56ca8-6faa-46a3-ad28-58d6771e819c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.176585110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=90bf5568-09c6-49c9-bc4b-155dd8fc6560 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.176641215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=90bf5568-09c6-49c9-bc4b-155dd8fc6560 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:29:49 embed-certs-340656 crio[712]: time="2024-02-13 23:29:49.176950542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2,PodSandboxId:302a7260a315b99c64c14f627ac377d8418f3c4c091c1484fca322a24691dbd5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866032663207614,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdcb32e-024c-4055-b02f-807b7cc69b74,},Annotations:map[string]string{io.kubernetes.container.hash: 413cf8f3,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c,PodSandboxId:7e450494066d6d8541456ffc1ab5f4f4d68a610f48dd011c1753932f3d0ecf11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866032328795392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4vgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 456eb472-9014-4674-b03c-8e2a0997455b,},Annotations:map[string]string{io.kubernetes.container.hash: 197ab0bb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7,PodSandboxId:fd873d3b7e951f4404092d0e0327a7aa66e93df225855554ac9955d13ba62e78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866031158638770,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-vrbjt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c7f72d-10b1-467f-92ac-2888540bd3a5,},Annotations:map[string]string{io.kubernetes.container.hash: a17d255,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95,PodSandboxId:5a86bfa47c18385bc175b35bc91a2c45699a9900120f0ddb6d5c180b50ab6608,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866008732965537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 87fc2c43d84856cc722d882ffa68fd93,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e,PodSandboxId:0ab26698d4d9486b2b0f9108887b44a7f798745e741829c57121971376d689c4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866008626038553,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4efe22c69fab880a31247949f69305fe,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 8ed93afc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847,PodSandboxId:24a5970c34d86199b6f626e4cd0881656c9a134c2680aa095007215686779163,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866008112776582,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9b7248f5e11d26324004
2b6cccb18a,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a,PodSandboxId:a4526e1366aae8a634abb09119cf64d915213e50775c9c7801321e6252dbd52e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866008134692646,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-340656,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65b418825c26a2b239b9b23b38957138
,},Annotations:map[string]string{io.kubernetes.container.hash: 4fac5632,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=90bf5568-09c6-49c9-bc4b-155dd8fc6560 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e4f1dbcd4edc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   302a7260a315b       storage-provisioner
	92a991060a144       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   7e450494066d6       kube-proxy-4vgt5
	5f131d6441857       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   fd873d3b7e951       coredns-5dd5756b68-vrbjt
	404d20f685e67       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   5a86bfa47c183       kube-scheduler-embed-certs-340656
	fadcdf769480f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   0ab26698d4d94       etcd-embed-certs-340656
	746971c6f43b8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   a4526e1366aae       kube-apiserver-embed-certs-340656
	59007ae81d380       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   24a5970c34d86       kube-controller-manager-embed-certs-340656
	
	
	==> coredns [5f131d644185774e3171874ef7bca32d44917cd27ac29133e366b064b9584ae7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               embed-certs-340656
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-340656
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=embed-certs-340656
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-340656
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:29:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:29:15 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:29:15 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:29:15 +0000   Tue, 13 Feb 2024 23:13:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:29:15 +0000   Tue, 13 Feb 2024 23:13:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.56
	  Hostname:    embed-certs-340656
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d053c755c4e49a894725a6234e23a06
	  System UUID:                0d053c75-5c4e-49a8-9472-5a6234e23a06
	  Boot ID:                    abe2c3cc-6972-474c-bc98-db199fdff60d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vrbjt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-340656                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-340656             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-340656    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4vgt5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-340656             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-9vcz5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-340656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-340656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-340656 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-340656 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-340656 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-340656 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node embed-certs-340656 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node embed-certs-340656 status is now: NodeReady
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-340656 event: Registered Node embed-certs-340656 in Controller
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070138] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.479729] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.552448] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139457] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.532888] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.533340] systemd-fstab-generator[639]: Ignoring "noauto" for root device
	[  +0.107784] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.175273] systemd-fstab-generator[663]: Ignoring "noauto" for root device
	[  +0.127266] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +0.254494] systemd-fstab-generator[698]: Ignoring "noauto" for root device
	[ +17.822417] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[Feb13 23:09] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:13] systemd-fstab-generator[3471]: Ignoring "noauto" for root device
	[ +10.328581] systemd-fstab-generator[3793]: Ignoring "noauto" for root device
	[ +12.816645] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [fadcdf769480f1691413cffe553498d3e4ad506a49f7c071b6f11c1591c5cb6e] <==
	{"level":"info","ts":"2024-02-13T23:13:31.033057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 received MsgVoteResp from c137f0a735fac174 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c137f0a735fac174 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.033087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c137f0a735fac174 elected leader c137f0a735fac174 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:31.035196Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c137f0a735fac174","local-member-attributes":"{Name:embed-certs-340656 ClientURLs:[https://192.168.61.56:2379]}","request-path":"/0/members/c137f0a735fac174/attributes","cluster-id":"1232dcd2bbaf9bcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:13:31.03566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:31.036827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:31.037909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:31.038082Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:31.038145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:31.041251Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.56:2379"}
	{"level":"info","ts":"2024-02-13T23:13:31.041542Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.04558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1232dcd2bbaf9bcb","local-member-id":"c137f0a735fac174","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.045846Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:31.04612Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:23:31.338223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-02-13T23:23:31.341344Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.279471ms","hash":2223224541}
	{"level":"info","ts":"2024-02-13T23:23:31.341536Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2223224541,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-02-13T23:28:31.34723Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-02-13T23:28:31.350176Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":957,"took":"2.349109ms","hash":1169982680}
	{"level":"info","ts":"2024-02-13T23:28:31.350269Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1169982680,"revision":957,"compact-revision":714}
	{"level":"info","ts":"2024-02-13T23:28:35.273668Z","caller":"traceutil/trace.go:171","msg":"trace[927668246] transaction","detail":"{read_only:false; response_revision:1204; number_of_response:1; }","duration":"204.560183ms","start":"2024-02-13T23:28:35.069044Z","end":"2024-02-13T23:28:35.273604Z","steps":["trace[927668246] 'process raft request'  (duration: 142.253133ms)","trace[927668246] 'compare'  (duration: 62.057967ms)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T23:28:35.5077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.774391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T23:28:35.508286Z","caller":"traceutil/trace.go:171","msg":"trace[860861300] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1204; }","duration":"117.446352ms","start":"2024-02-13T23:28:35.390809Z","end":"2024-02-13T23:28:35.508255Z","steps":["trace[860861300] 'range keys from in-memory index tree'  (duration: 116.668223ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:29:18.854398Z","caller":"traceutil/trace.go:171","msg":"trace[1156418103] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"111.255323ms","start":"2024-02-13T23:29:18.743098Z","end":"2024-02-13T23:29:18.854354Z","steps":["trace[1156418103] 'process raft request'  (duration: 111.109217ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:29:49 up 21 min,  0 users,  load average: 0.49, 0.28, 0.21
	Linux embed-certs-340656 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [746971c6f43b86587cd812f7c87871e29a509be2d37ae35228f87f5974ac0d9a] <==
	I0213 23:26:34.068640       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:34.070642       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:34.070688       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:26:34.070697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:27:32.949848       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 23:28:32.950293       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:28:33.072312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:33.072428       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:28:33.072998       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:28:34.073356       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:34.073442       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:28:34.073466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:28:34.073530       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:34.073605       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:28:34.075100       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:29:32.952642       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:29:34.073789       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:29:34.074023       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:29:34.074077       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:29:34.076114       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:29:34.076226       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:29:34.076242       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [59007ae81d380d2db75f68e9b35bd23ad280d1a4a66ea718e36f5fb65ec32847] <==
	I0213 23:24:18.742832       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:48.185620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:48.194971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="340.192µs"
	I0213 23:24:48.751812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:25:03.195399       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="88.097µs"
	E0213 23:25:18.191545       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:18.761600       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:25:48.197831       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:48.771812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:18.204292       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:18.785139       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:48.211095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:48.794376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:18.217049       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:18.804133       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:48.225844       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:48.813445       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:28:18.232094       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:28:18.822426       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:28:48.240419       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:28:48.833533       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:29:18.247093       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:29:18.843682       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:29:48.256685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:29:48.863290       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [92a991060a1445c2de175d6adeeea37f45d6f5546577ea729019d96e3b13d10c] <==
	I0213 23:13:53.044464       1 server_others.go:69] "Using iptables proxy"
	I0213 23:13:53.062024       1 node.go:141] Successfully retrieved node IP: 192.168.61.56
	I0213 23:13:53.122899       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 23:13:53.122964       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:13:53.129146       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:13:53.129635       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:13:53.130047       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:13:53.130156       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:13:53.132294       1 config.go:188] "Starting service config controller"
	I0213 23:13:53.132512       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:13:53.132960       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:13:53.133115       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:13:53.134347       1 config.go:315] "Starting node config controller"
	I0213 23:13:53.134399       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:13:53.233993       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:13:53.234291       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:13:53.235212       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [404d20f685e67d40d820b5f7ad44a4a0eef92e59abe95e2606ed334ac1582b95] <==
	W0213 23:13:34.011889       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.011920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.116429       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:34.116675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:34.287080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:34.287138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:34.290480       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:34.290509       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:13:34.298035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.298089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.326576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:13:34.326679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 23:13:34.393220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:34.393453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:34.445063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:34.445194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:34.469018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:34.469338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:34.508641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:34.508839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:34.511142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:13:34.511283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:13:34.544122       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:13:34.544474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0213 23:13:36.195413       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:21 UTC, ends at Tue 2024-02-13 23:29:49 UTC. --
	Feb 13 23:27:15 embed-certs-340656 kubelet[3800]: E0213 23:27:15.175221    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:27:26 embed-certs-340656 kubelet[3800]: E0213 23:27:26.175204    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:27:37 embed-certs-340656 kubelet[3800]: E0213 23:27:37.175405    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:27:37 embed-certs-340656 kubelet[3800]: E0213 23:27:37.304177    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:27:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:27:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:27:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:27:50 embed-certs-340656 kubelet[3800]: E0213 23:27:50.173994    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:28:04 embed-certs-340656 kubelet[3800]: E0213 23:28:04.174478    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:28:20 embed-certs-340656 kubelet[3800]: E0213 23:28:20.174977    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:28:32 embed-certs-340656 kubelet[3800]: E0213 23:28:32.175198    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:28:37 embed-certs-340656 kubelet[3800]: E0213 23:28:37.306115    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:28:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:28:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:28:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:28:37 embed-certs-340656 kubelet[3800]: E0213 23:28:37.421679    3800 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Feb 13 23:28:44 embed-certs-340656 kubelet[3800]: E0213 23:28:44.174620    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:28:55 embed-certs-340656 kubelet[3800]: E0213 23:28:55.176133    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:29:10 embed-certs-340656 kubelet[3800]: E0213 23:29:10.174889    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:29:25 embed-certs-340656 kubelet[3800]: E0213 23:29:25.175290    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:29:37 embed-certs-340656 kubelet[3800]: E0213 23:29:37.176106    3800 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9vcz5" podUID="8df81e37-71b7-4220-9652-070538ce5a7f"
	Feb 13 23:29:37 embed-certs-340656 kubelet[3800]: E0213 23:29:37.309043    3800 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:29:37 embed-certs-340656 kubelet[3800]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:29:37 embed-certs-340656 kubelet[3800]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:29:37 embed-certs-340656 kubelet[3800]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9e4f1dbcd4edc4e8c509af497703e5b746ca87b423e493834d5bcc2b4f3eb7c2] <==
	I0213 23:13:52.920419       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:13:52.969691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:13:52.970011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:13:52.991541       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:13:52.992507       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876!
	I0213 23:13:52.996145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6771fed6-6360-43c6-8cc5-5fae0fde2cc2", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876 became leader
	I0213 23:13:53.093475       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-340656_88f9f32e-e538-4fc2-8364-6c8f32ca6876!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-340656 -n embed-certs-340656
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-340656 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9vcz5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5: exit status 1 (70.611724ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9vcz5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-340656 describe pod metrics-server-57f55c9bc5-9vcz5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (160.19s)
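
Editor's note: the post-mortem steps above (helpers_test.go:261 and helpers_test.go:277) can be reproduced by hand against the same profile. A minimal sketch, assuming the embed-certs-340656 cluster is still running and only kubectl is available; command names are taken from the log above, the namespace flag is an addition:

	# list pods that are not in the Running phase, mirroring helpers_test.go:261
	kubectl --context embed-certs-340656 get po -A --field-selector=status.phase!=Running
	# describe the flagged pod in kube-system; the NotFound error in the report above is most
	# likely because the describe there ran against the default namespace while the pod lives
	# in kube-system
	kubectl --context embed-certs-340656 -n kube-system describe pod metrics-server-57f55c9bc5-9vcz5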

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (74.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778731 -n no-preload-778731
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:28:36.417721882 +0000 UTC m=+5536.032495795
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-778731 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-778731 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.526µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-778731 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
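
Editor's note: the two assertions above map onto plain kubectl checks. A rough sketch of the equivalent manual steps, assuming the no-preload-778731 profile is reachable and the dashboard addon was enabled; the 540s timeout is chosen to match the 9m0s the test waits and is not part of the test itself:

	# wait for the dashboard pods the test selects on (k8s-app=kubernetes-dashboard)
	kubectl --context no-preload-778731 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=ready --timeout=540s
	# print the scraper deployment's image and confirm it contains registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-778731 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'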
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-778731 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-778731 logs -n 25: (1.814995885s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-998671                                        | pause-998671                 | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 22:59 UTC |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 22:59 UTC | 13 Feb 24 23:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-245122        | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-675174                              | cert-expiration-675174       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	| stop    | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-755510 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | disable-driver-mounts-755510                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:02 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-778731             | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC | 13 Feb 24 23:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-340656            | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC | 13 Feb 24 23:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:01 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-083863  | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC | 13 Feb 24 23:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:02 UTC |                     |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:28 UTC |
	| start   | -p newest-cni-120411 --memory=2200 --alsologtostderr   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:28:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:28:02.068263   54882 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:28:02.068396   54882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:28:02.068403   54882 out.go:304] Setting ErrFile to fd 2...
	I0213 23:28:02.068408   54882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:28:02.068609   54882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:28:02.069211   54882 out.go:298] Setting JSON to false
	I0213 23:28:02.070210   54882 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7833,"bootTime":1707859049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:28:02.070276   54882 start.go:138] virtualization: kvm guest
	I0213 23:28:02.072782   54882 out.go:177] * [newest-cni-120411] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:28:02.074388   54882 notify.go:220] Checking for updates...
	I0213 23:28:02.074400   54882 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:28:02.075769   54882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:28:02.077079   54882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:28:02.078391   54882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:28:02.079717   54882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:28:02.080973   54882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:28:02.082850   54882 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:28:02.082985   54882 config.go:182] Loaded profile config "embed-certs-340656": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:28:02.083095   54882 config.go:182] Loaded profile config "no-preload-778731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:28:02.083202   54882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:28:02.121228   54882 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 23:28:02.122418   54882 start.go:298] selected driver: kvm2
	I0213 23:28:02.122439   54882 start.go:902] validating driver "kvm2" against <nil>
	I0213 23:28:02.122461   54882 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:28:02.123194   54882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:28:02.123269   54882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:28:02.138001   54882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:28:02.138071   54882 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0213 23:28:02.138110   54882 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0213 23:28:02.138351   54882 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0213 23:28:02.138422   54882 cni.go:84] Creating CNI manager for ""
	I0213 23:28:02.138446   54882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:28:02.138461   54882 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 23:28:02.138474   54882 start_flags.go:321] config:
	{Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:28:02.138865   54882 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:28:02.140400   54882 out.go:177] * Starting control plane node newest-cni-120411 in cluster newest-cni-120411
	I0213 23:28:02.141661   54882 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:28:02.141702   54882 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0213 23:28:02.141722   54882 cache.go:56] Caching tarball of preloaded images
	I0213 23:28:02.141808   54882 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:28:02.141822   54882 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0213 23:28:02.141953   54882 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/config.json ...
	I0213 23:28:02.141978   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/config.json: {Name:mk89aba5d3c0bc7bca13e88c1a7276d83f2ba9e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:02.142132   54882 start.go:365] acquiring machines lock for newest-cni-120411: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:28:02.142173   54882 start.go:369] acquired machines lock for "newest-cni-120411" in 21.937µs
	I0213 23:28:02.142195   54882 start.go:93] Provisioning new machine with config: &{Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:28:02.142273   54882 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 23:28:02.144663   54882 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0213 23:28:02.144823   54882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:28:02.144872   54882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:28:02.160355   54882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34549
	I0213 23:28:02.160834   54882 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:28:02.161405   54882 main.go:141] libmachine: Using API Version  1
	I0213 23:28:02.161429   54882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:28:02.161776   54882 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:28:02.162017   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:28:02.162170   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:02.162329   54882 start.go:159] libmachine.API.Create for "newest-cni-120411" (driver="kvm2")
	I0213 23:28:02.162353   54882 client.go:168] LocalClient.Create starting
	I0213 23:28:02.162386   54882 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem
	I0213 23:28:02.162421   54882 main.go:141] libmachine: Decoding PEM data...
	I0213 23:28:02.162438   54882 main.go:141] libmachine: Parsing certificate...
	I0213 23:28:02.162485   54882 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem
	I0213 23:28:02.162503   54882 main.go:141] libmachine: Decoding PEM data...
	I0213 23:28:02.162514   54882 main.go:141] libmachine: Parsing certificate...
	I0213 23:28:02.162536   54882 main.go:141] libmachine: Running pre-create checks...
	I0213 23:28:02.162547   54882 main.go:141] libmachine: (newest-cni-120411) Calling .PreCreateCheck
	I0213 23:28:02.162942   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetConfigRaw
	I0213 23:28:02.163331   54882 main.go:141] libmachine: Creating machine...
	I0213 23:28:02.163343   54882 main.go:141] libmachine: (newest-cni-120411) Calling .Create
	I0213 23:28:02.163494   54882 main.go:141] libmachine: (newest-cni-120411) Creating KVM machine...
	I0213 23:28:02.164947   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found existing default KVM network
	I0213 23:28:02.166468   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.166289   54905 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:23:2b} reservation:<nil>}
	I0213 23:28:02.167679   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.167595   54905 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ac7a0}
	I0213 23:28:02.173945   54882 main.go:141] libmachine: (newest-cni-120411) DBG | trying to create private KVM network mk-newest-cni-120411 192.168.50.0/24...
	I0213 23:28:02.255124   54882 main.go:141] libmachine: (newest-cni-120411) DBG | private KVM network mk-newest-cni-120411 192.168.50.0/24 created
	I0213 23:28:02.255207   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.255073   54905 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:28:02.255227   54882 main.go:141] libmachine: (newest-cni-120411) Setting up store path in /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411 ...
	I0213 23:28:02.255241   54882 main.go:141] libmachine: (newest-cni-120411) Building disk image from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 23:28:02.255261   54882 main.go:141] libmachine: (newest-cni-120411) Downloading /home/jenkins/minikube-integration/18171-8990/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 23:28:02.456466   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.456325   54905 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa...
	I0213 23:28:02.799808   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.799665   54905 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/newest-cni-120411.rawdisk...
	I0213 23:28:02.799840   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Writing magic tar header
	I0213 23:28:02.799890   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Writing SSH key tar header
	I0213 23:28:02.799915   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:02.799831   54905 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411 ...
	I0213 23:28:02.800023   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411
	I0213 23:28:02.800051   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines
	I0213 23:28:02.800070   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411 (perms=drwx------)
	I0213 23:28:02.800093   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines (perms=drwxr-xr-x)
	I0213 23:28:02.800108   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:28:02.800122   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube (perms=drwxr-xr-x)
	I0213 23:28:02.800135   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990
	I0213 23:28:02.800152   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990 (perms=drwxrwxr-x)
	I0213 23:28:02.800163   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 23:28:02.800181   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home/jenkins
	I0213 23:28:02.800195   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Checking permissions on dir: /home
	I0213 23:28:02.800211   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Skipping /home - not owner
	I0213 23:28:02.800225   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 23:28:02.800237   54882 main.go:141] libmachine: (newest-cni-120411) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 23:28:02.800252   54882 main.go:141] libmachine: (newest-cni-120411) Creating domain...
	I0213 23:28:02.801421   54882 main.go:141] libmachine: (newest-cni-120411) define libvirt domain using xml: 
	I0213 23:28:02.801454   54882 main.go:141] libmachine: (newest-cni-120411) <domain type='kvm'>
	I0213 23:28:02.801470   54882 main.go:141] libmachine: (newest-cni-120411)   <name>newest-cni-120411</name>
	I0213 23:28:02.801479   54882 main.go:141] libmachine: (newest-cni-120411)   <memory unit='MiB'>2200</memory>
	I0213 23:28:02.801499   54882 main.go:141] libmachine: (newest-cni-120411)   <vcpu>2</vcpu>
	I0213 23:28:02.801514   54882 main.go:141] libmachine: (newest-cni-120411)   <features>
	I0213 23:28:02.801527   54882 main.go:141] libmachine: (newest-cni-120411)     <acpi/>
	I0213 23:28:02.801541   54882 main.go:141] libmachine: (newest-cni-120411)     <apic/>
	I0213 23:28:02.801576   54882 main.go:141] libmachine: (newest-cni-120411)     <pae/>
	I0213 23:28:02.801605   54882 main.go:141] libmachine: (newest-cni-120411)     
	I0213 23:28:02.801620   54882 main.go:141] libmachine: (newest-cni-120411)   </features>
	I0213 23:28:02.801632   54882 main.go:141] libmachine: (newest-cni-120411)   <cpu mode='host-passthrough'>
	I0213 23:28:02.801646   54882 main.go:141] libmachine: (newest-cni-120411)   
	I0213 23:28:02.801653   54882 main.go:141] libmachine: (newest-cni-120411)   </cpu>
	I0213 23:28:02.801679   54882 main.go:141] libmachine: (newest-cni-120411)   <os>
	I0213 23:28:02.801704   54882 main.go:141] libmachine: (newest-cni-120411)     <type>hvm</type>
	I0213 23:28:02.801719   54882 main.go:141] libmachine: (newest-cni-120411)     <boot dev='cdrom'/>
	I0213 23:28:02.801730   54882 main.go:141] libmachine: (newest-cni-120411)     <boot dev='hd'/>
	I0213 23:28:02.801743   54882 main.go:141] libmachine: (newest-cni-120411)     <bootmenu enable='no'/>
	I0213 23:28:02.801754   54882 main.go:141] libmachine: (newest-cni-120411)   </os>
	I0213 23:28:02.801794   54882 main.go:141] libmachine: (newest-cni-120411)   <devices>
	I0213 23:28:02.801823   54882 main.go:141] libmachine: (newest-cni-120411)     <disk type='file' device='cdrom'>
	I0213 23:28:02.801855   54882 main.go:141] libmachine: (newest-cni-120411)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/boot2docker.iso'/>
	I0213 23:28:02.801898   54882 main.go:141] libmachine: (newest-cni-120411)       <target dev='hdc' bus='scsi'/>
	I0213 23:28:02.801916   54882 main.go:141] libmachine: (newest-cni-120411)       <readonly/>
	I0213 23:28:02.801930   54882 main.go:141] libmachine: (newest-cni-120411)     </disk>
	I0213 23:28:02.801956   54882 main.go:141] libmachine: (newest-cni-120411)     <disk type='file' device='disk'>
	I0213 23:28:02.801979   54882 main.go:141] libmachine: (newest-cni-120411)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 23:28:02.802003   54882 main.go:141] libmachine: (newest-cni-120411)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/newest-cni-120411.rawdisk'/>
	I0213 23:28:02.802023   54882 main.go:141] libmachine: (newest-cni-120411)       <target dev='hda' bus='virtio'/>
	I0213 23:28:02.802037   54882 main.go:141] libmachine: (newest-cni-120411)     </disk>
	I0213 23:28:02.802051   54882 main.go:141] libmachine: (newest-cni-120411)     <interface type='network'>
	I0213 23:28:02.802081   54882 main.go:141] libmachine: (newest-cni-120411)       <source network='mk-newest-cni-120411'/>
	I0213 23:28:02.802098   54882 main.go:141] libmachine: (newest-cni-120411)       <model type='virtio'/>
	I0213 23:28:02.802114   54882 main.go:141] libmachine: (newest-cni-120411)     </interface>
	I0213 23:28:02.802131   54882 main.go:141] libmachine: (newest-cni-120411)     <interface type='network'>
	I0213 23:28:02.802146   54882 main.go:141] libmachine: (newest-cni-120411)       <source network='default'/>
	I0213 23:28:02.802159   54882 main.go:141] libmachine: (newest-cni-120411)       <model type='virtio'/>
	I0213 23:28:02.802174   54882 main.go:141] libmachine: (newest-cni-120411)     </interface>
	I0213 23:28:02.802187   54882 main.go:141] libmachine: (newest-cni-120411)     <serial type='pty'>
	I0213 23:28:02.802197   54882 main.go:141] libmachine: (newest-cni-120411)       <target port='0'/>
	I0213 23:28:02.802213   54882 main.go:141] libmachine: (newest-cni-120411)     </serial>
	I0213 23:28:02.802227   54882 main.go:141] libmachine: (newest-cni-120411)     <console type='pty'>
	I0213 23:28:02.802241   54882 main.go:141] libmachine: (newest-cni-120411)       <target type='serial' port='0'/>
	I0213 23:28:02.802287   54882 main.go:141] libmachine: (newest-cni-120411)     </console>
	I0213 23:28:02.802325   54882 main.go:141] libmachine: (newest-cni-120411)     <rng model='virtio'>
	I0213 23:28:02.802339   54882 main.go:141] libmachine: (newest-cni-120411)       <backend model='random'>/dev/random</backend>
	I0213 23:28:02.802348   54882 main.go:141] libmachine: (newest-cni-120411)     </rng>
	I0213 23:28:02.802360   54882 main.go:141] libmachine: (newest-cni-120411)     
	I0213 23:28:02.802375   54882 main.go:141] libmachine: (newest-cni-120411)     
	I0213 23:28:02.802385   54882 main.go:141] libmachine: (newest-cni-120411)   </devices>
	I0213 23:28:02.802404   54882 main.go:141] libmachine: (newest-cni-120411) </domain>
	I0213 23:28:02.802421   54882 main.go:141] libmachine: (newest-cni-120411) 
	I0213 23:28:02.807026   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:8f:c0:61 in network default
	I0213 23:28:02.807642   54882 main.go:141] libmachine: (newest-cni-120411) Ensuring networks are active...
	I0213 23:28:02.807667   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:02.808322   54882 main.go:141] libmachine: (newest-cni-120411) Ensuring network default is active
	I0213 23:28:02.808659   54882 main.go:141] libmachine: (newest-cni-120411) Ensuring network mk-newest-cni-120411 is active
	I0213 23:28:02.809197   54882 main.go:141] libmachine: (newest-cni-120411) Getting domain xml...
	I0213 23:28:02.810070   54882 main.go:141] libmachine: (newest-cni-120411) Creating domain...
	I0213 23:28:04.083532   54882 main.go:141] libmachine: (newest-cni-120411) Waiting to get IP...
	I0213 23:28:04.084450   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:04.085049   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:04.085074   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:04.085017   54905 retry.go:31] will retry after 220.441437ms: waiting for machine to come up
	I0213 23:28:04.307540   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:04.308082   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:04.308142   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:04.308023   54905 retry.go:31] will retry after 286.67957ms: waiting for machine to come up
	I0213 23:28:04.596616   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:04.597092   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:04.597125   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:04.597014   54905 retry.go:31] will retry after 385.136855ms: waiting for machine to come up
	I0213 23:28:04.983522   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:04.984082   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:04.984112   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:04.984029   54905 retry.go:31] will retry after 431.450976ms: waiting for machine to come up
	I0213 23:28:05.416700   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:05.417140   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:05.417170   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:05.417101   54905 retry.go:31] will retry after 696.227774ms: waiting for machine to come up
	I0213 23:28:06.114524   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:06.114902   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:06.114946   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:06.114849   54905 retry.go:31] will retry after 779.147773ms: waiting for machine to come up
	I0213 23:28:06.895760   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:06.896318   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:06.896349   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:06.896261   54905 retry.go:31] will retry after 850.18014ms: waiting for machine to come up
	I0213 23:28:07.748200   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:07.748741   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:07.748785   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:07.748674   54905 retry.go:31] will retry after 1.320679049s: waiting for machine to come up
	I0213 23:28:09.070908   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:09.071409   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:09.071439   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:09.071353   54905 retry.go:31] will retry after 1.357788589s: waiting for machine to come up
	I0213 23:28:10.431024   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:10.431429   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:10.431458   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:10.431391   54905 retry.go:31] will retry after 2.284468235s: waiting for machine to come up
	I0213 23:28:12.718021   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:12.718492   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:12.718515   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:12.718447   54905 retry.go:31] will retry after 2.847949091s: waiting for machine to come up
	I0213 23:28:15.568035   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:15.568518   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:15.568547   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:15.568476   54905 retry.go:31] will retry after 3.354640718s: waiting for machine to come up
	I0213 23:28:18.924425   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:18.924938   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:18.924969   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:18.924886   54905 retry.go:31] will retry after 4.084147228s: waiting for machine to come up
	I0213 23:28:23.010281   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:23.010725   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find current IP address of domain newest-cni-120411 in network mk-newest-cni-120411
	I0213 23:28:23.010748   54882 main.go:141] libmachine: (newest-cni-120411) DBG | I0213 23:28:23.010666   54905 retry.go:31] will retry after 3.992803872s: waiting for machine to come up
	I0213 23:28:27.007049   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.007610   54882 main.go:141] libmachine: (newest-cni-120411) Found IP for machine: 192.168.50.143
	I0213 23:28:27.007659   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has current primary IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.007675   54882 main.go:141] libmachine: (newest-cni-120411) Reserving static IP address...
	I0213 23:28:27.008060   54882 main.go:141] libmachine: (newest-cni-120411) DBG | unable to find host DHCP lease matching {name: "newest-cni-120411", mac: "52:54:00:e5:49:c2", ip: "192.168.50.143"} in network mk-newest-cni-120411
	I0213 23:28:27.094304   54882 main.go:141] libmachine: (newest-cni-120411) Reserved static IP address: 192.168.50.143
	I0213 23:28:27.094331   54882 main.go:141] libmachine: (newest-cni-120411) Waiting for SSH to be available...
	I0213 23:28:27.094343   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Getting to WaitForSSH function...
	I0213 23:28:27.097310   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.097751   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.097782   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.097962   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Using SSH client type: external
	I0213 23:28:27.098005   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa (-rw-------)
	I0213 23:28:27.098036   54882 main.go:141] libmachine: (newest-cni-120411) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:28:27.098051   54882 main.go:141] libmachine: (newest-cni-120411) DBG | About to run SSH command:
	I0213 23:28:27.098086   54882 main.go:141] libmachine: (newest-cni-120411) DBG | exit 0
	I0213 23:28:27.185965   54882 main.go:141] libmachine: (newest-cni-120411) DBG | SSH cmd err, output: <nil>: 
	I0213 23:28:27.186244   54882 main.go:141] libmachine: (newest-cni-120411) KVM machine creation complete!
	I0213 23:28:27.186622   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetConfigRaw
	I0213 23:28:27.187243   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:27.187437   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:27.187614   54882 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 23:28:27.187632   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetState
	I0213 23:28:27.188874   54882 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 23:28:27.188890   54882 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 23:28:27.188898   54882 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 23:28:27.188908   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.191318   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.191766   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.191798   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.191929   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:27.192112   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.192281   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.192415   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:27.192590   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:27.193003   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:27.193017   54882 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 23:28:27.309437   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:28:27.309466   54882 main.go:141] libmachine: Detecting the provisioner...
	I0213 23:28:27.309478   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.312575   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.312936   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.312966   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.313131   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:27.313363   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.313529   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.313692   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:27.313892   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:27.314226   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:27.314239   54882 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 23:28:27.426951   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 23:28:27.427046   54882 main.go:141] libmachine: found compatible host: buildroot
	I0213 23:28:27.427063   54882 main.go:141] libmachine: Provisioning with buildroot...
	I0213 23:28:27.427078   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:28:27.427340   54882 buildroot.go:166] provisioning hostname "newest-cni-120411"
	I0213 23:28:27.427366   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:28:27.427535   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.430310   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.430741   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.430773   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.430919   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:27.431089   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.431234   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.431336   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:27.431513   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:27.431934   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:27.431960   54882 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-120411 && echo "newest-cni-120411" | sudo tee /etc/hostname
	I0213 23:28:27.560940   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-120411
	
	I0213 23:28:27.560992   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.564139   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.564518   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.564556   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.564821   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:27.565038   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.565183   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.565325   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:27.565544   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:27.565978   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:27.566020   54882 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-120411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-120411/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-120411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:28:27.692683   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:28:27.692722   54882 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:28:27.692778   54882 buildroot.go:174] setting up certificates
	I0213 23:28:27.692794   54882 provision.go:83] configureAuth start
	I0213 23:28:27.692811   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetMachineName
	I0213 23:28:27.693148   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:28:27.695769   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.696160   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.696198   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.696382   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.698913   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.699369   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.699396   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.699562   54882 provision.go:138] copyHostCerts
	I0213 23:28:27.699628   54882 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:28:27.699656   54882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:28:27.699760   54882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:28:27.699874   54882 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:28:27.699885   54882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:28:27.699925   54882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:28:27.700018   54882 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:28:27.700030   54882 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:28:27.700067   54882 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:28:27.700135   54882 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.newest-cni-120411 san=[192.168.50.143 192.168.50.143 localhost 127.0.0.1 minikube newest-cni-120411]
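The SAN list in the line above (the node IP twice, localhost, 127.0.0.1, minikube, and the node name) is what ends up in the guest's /etc/docker/server.pem. As a minimal, self-contained Go sketch of producing a certificate with those SANs (self-signed and with made-up serial/validity values for brevity; minikube's provision code signs against its ca.pem/ca-key.pem instead):
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key and certificate template; the SANs mirror the log line,
		// everything else (serial, validity) is invented for the sketch.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-120411"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.50.143"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-120411"},
		}
		// Self-signed here, so the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}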
	I0213 23:28:27.879981   54882 provision.go:172] copyRemoteCerts
	I0213 23:28:27.880041   54882 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:28:27.880063   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:27.882728   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.883083   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:27.883116   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:27.883255   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:27.883398   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:27.883569   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:27.883749   54882 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:28:27.972565   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:28:27.999392   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0213 23:28:28.027332   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0213 23:28:28.051336   54882 provision.go:86] duration metric: configureAuth took 358.525148ms
	I0213 23:28:28.051369   54882 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:28:28.051571   54882 config.go:182] Loaded profile config "newest-cni-120411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0213 23:28:28.051657   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:28.054501   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.054838   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.054883   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.055034   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:28.055239   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.055427   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.055530   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:28.055739   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:28.056053   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:28.056076   54882 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:28:28.377235   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:28:28.377268   54882 main.go:141] libmachine: Checking connection to Docker...
	I0213 23:28:28.377281   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetURL
	I0213 23:28:28.378708   54882 main.go:141] libmachine: (newest-cni-120411) DBG | Using libvirt version 6000000
	I0213 23:28:28.380951   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.381369   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.381421   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.381587   54882 main.go:141] libmachine: Docker is up and running!
	I0213 23:28:28.381610   54882 main.go:141] libmachine: Reticulating splines...
	I0213 23:28:28.381619   54882 client.go:171] LocalClient.Create took 26.219255581s
	I0213 23:28:28.381648   54882 start.go:167] duration metric: libmachine.API.Create for "newest-cni-120411" took 26.219318311s
	I0213 23:28:28.381665   54882 start.go:300] post-start starting for "newest-cni-120411" (driver="kvm2")
	I0213 23:28:28.381685   54882 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:28:28.381707   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:28.381965   54882 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:28:28.382010   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:28.384452   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.384816   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.384841   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.385069   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:28.385251   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.385409   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:28.385534   54882 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:28:28.471815   54882 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:28:28.476603   54882 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:28:28.476635   54882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:28:28.476708   54882 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:28:28.476809   54882 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:28:28.476968   54882 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:28:28.486275   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:28:28.511288   54882 start.go:303] post-start completed in 129.605087ms
	I0213 23:28:28.511344   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetConfigRaw
	I0213 23:28:28.512019   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:28:28.514679   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.515027   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.515064   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.515376   54882 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/config.json ...
	I0213 23:28:28.515595   54882 start.go:128] duration metric: createHost completed in 26.373311074s
	I0213 23:28:28.515625   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:28.518076   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.518463   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.518491   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.518637   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:28.518849   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.519024   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.519172   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:28.519336   54882 main.go:141] libmachine: Using SSH client type: native
	I0213 23:28:28.519715   54882 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.50.143 22 <nil> <nil>}
	I0213 23:28:28.519728   54882 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:28:28.634987   54882 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707866908.617434952
	
	I0213 23:28:28.635009   54882 fix.go:206] guest clock: 1707866908.617434952
	I0213 23:28:28.635018   54882 fix.go:219] Guest: 2024-02-13 23:28:28.617434952 +0000 UTC Remote: 2024-02-13 23:28:28.515609575 +0000 UTC m=+26.497574968 (delta=101.825377ms)
	I0213 23:28:28.635061   54882 fix.go:190] guest clock delta is within tolerance: 101.825377ms
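The guest-clock check above parses the VM's `date +%s.%N` output and accepts a small skew against the host-side timestamp. A minimal Go sketch of that comparison, reusing the two timestamps from the log; the 2s tolerance is an assumption, the real threshold is not shown here:
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the guest's `date +%s.%N` output ("seconds.nanoseconds")
	// into a time.Time. It assumes a full 9-digit fractional part, as in the log.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec).UTC(), nil
	}

	func main() {
		guest, err := parseGuestClock("1707866908.617434952") // guest timestamp from the log
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 2, 13, 23, 28, 28, 515609575, time.UTC) // host-side timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, not taken from the log
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}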
	I0213 23:28:28.635067   54882 start.go:83] releasing machines lock for "newest-cni-120411", held for 26.492886431s
	I0213 23:28:28.635106   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:28.635403   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:28:28.638022   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.638414   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.638447   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.638586   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:28.639062   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:28.639238   54882 main.go:141] libmachine: (newest-cni-120411) Calling .DriverName
	I0213 23:28:28.639352   54882 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:28:28.639387   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:28.639458   54882 ssh_runner.go:195] Run: cat /version.json
	I0213 23:28:28.639488   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHHostname
	I0213 23:28:28.642140   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.642363   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.642524   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.642552   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.642693   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:28.642715   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:28.642758   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:28.642947   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHPort
	I0213 23:28:28.642957   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.643134   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHKeyPath
	I0213 23:28:28.643142   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:28.643299   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetSSHUsername
	I0213 23:28:28.643295   54882 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:28:28.643434   54882 sshutil.go:53] new ssh client: &{IP:192.168.50.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/newest-cni-120411/id_rsa Username:docker}
	I0213 23:28:28.731222   54882 ssh_runner.go:195] Run: systemctl --version
	I0213 23:28:28.756170   54882 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:28:28.920925   54882 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:28:28.927324   54882 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:28:28.927385   54882 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:28:28.942847   54882 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:28:28.942884   54882 start.go:475] detecting cgroup driver to use...
	I0213 23:28:28.942954   54882 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:28:28.957828   54882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:28:28.972423   54882 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:28:28.972485   54882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:28:28.986036   54882 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:28:28.999819   54882 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:28:29.115051   54882 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:28:29.243886   54882 docker.go:233] disabling docker service ...
	I0213 23:28:29.243965   54882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:28:29.260900   54882 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:28:29.276996   54882 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:28:29.408718   54882 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:28:29.544630   54882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:28:29.559970   54882 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:28:29.579545   54882 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:28:29.579610   54882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:28:29.591781   54882 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:28:29.591862   54882 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:28:29.602939   54882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:28:29.612936   54882 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:28:29.622654   54882 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:28:29.633711   54882 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:28:29.642947   54882 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:28:29.643020   54882 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:28:29.657803   54882 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:28:29.667391   54882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:28:29.796280   54882 ssh_runner.go:195] Run: sudo systemctl restart crio
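The steps above pin the CRI-O pause image and switch the cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with `sed -i`, then restart crio. A standalone Go sketch of the same two line rewrites (illustrative only; it assumes the drop-in file exists and must run as root to write it back):
	package main

	import (
		"os"
		"regexp"
	)

	// Path taken from the log; on a real guest it must exist and be writable.
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

	func main() {
		data, err := os.ReadFile(confPath)
		if err != nil {
			panic(err)
		}
		// Equivalent of the two `sed -i` calls: pin the pause image and use cgroupfs.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(confPath, data, 0o644); err != nil {
			panic(err)
		}
		// A `sudo systemctl restart crio`, as in the log, is still needed afterwards.
	}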
	I0213 23:28:30.000187   54882 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:28:30.000269   54882 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:28:30.006492   54882 start.go:543] Will wait 60s for crictl version
	I0213 23:28:30.006552   54882 ssh_runner.go:195] Run: which crictl
	I0213 23:28:30.011894   54882 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:28:30.056905   54882 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:28:30.056996   54882 ssh_runner.go:195] Run: crio --version
	I0213 23:28:30.118494   54882 ssh_runner.go:195] Run: crio --version
	I0213 23:28:30.174009   54882 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0213 23:28:30.175349   54882 main.go:141] libmachine: (newest-cni-120411) Calling .GetIP
	I0213 23:28:30.178232   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:30.178590   54882 main.go:141] libmachine: (newest-cni-120411) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:49:c2", ip: ""} in network mk-newest-cni-120411: {Iface:virbr4 ExpiryTime:2024-02-14 00:28:18 +0000 UTC Type:0 Mac:52:54:00:e5:49:c2 Iaid: IPaddr:192.168.50.143 Prefix:24 Hostname:newest-cni-120411 Clientid:01:52:54:00:e5:49:c2}
	I0213 23:28:30.178613   54882 main.go:141] libmachine: (newest-cni-120411) DBG | domain newest-cni-120411 has defined IP address 192.168.50.143 and MAC address 52:54:00:e5:49:c2 in network mk-newest-cni-120411
	I0213 23:28:30.178790   54882 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0213 23:28:30.183502   54882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
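The command above makes the host.minikube.internal mapping idempotent: it filters out any stale line for that name and appends the current one before copying the file back over /etc/hosts. A small Go sketch of the same update, pointed at a hypothetical scratch file rather than the guest's real /etc/hosts:
	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<name>" and appends
	// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale mapping, rewritten below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// Hypothetical scratch copy so the sketch does not touch the real /etc/hosts.
		path := "hosts.copy"
		if err := os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0o644); err != nil {
			panic(err)
		}
		if err := ensureHostsEntry(path, "192.168.50.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}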
	I0213 23:28:30.197789   54882 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0213 23:28:30.199159   54882 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 23:28:30.199224   54882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:28:30.240198   54882 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0213 23:28:30.240260   54882 ssh_runner.go:195] Run: which lz4
	I0213 23:28:30.245056   54882 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:28:30.250070   54882 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:28:30.250108   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0213 23:28:31.936440   54882 crio.go:444] Took 1.691418 seconds to copy over tarball
	I0213 23:28:31.936540   54882 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:28:34.787654   54882 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.851068959s)
	I0213 23:28:34.787683   54882 crio.go:451] Took 2.851217 seconds to extract the tarball
	I0213 23:28:34.787693   54882 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:28:34.829164   54882 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:28:34.929018   54882 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:28:34.929060   54882 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:28:34.929153   54882 ssh_runner.go:195] Run: crio config
	I0213 23:28:34.987394   54882 cni.go:84] Creating CNI manager for ""
	I0213 23:28:34.987417   54882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 23:28:34.987435   54882 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0213 23:28:34.987461   54882 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.143 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-120411 NodeName:newest-cni-120411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.50.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:28:34.987616   54882 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-120411"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:28:34.987690   54882 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-120411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0213 23:28:34.987748   54882 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0213 23:28:34.997834   54882 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:28:34.997946   54882 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:28:35.008273   54882 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0213 23:28:35.026666   54882 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0213 23:28:35.043970   54882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I0213 23:28:35.061333   54882 ssh_runner.go:195] Run: grep 192.168.50.143	control-plane.minikube.internal$ /etc/hosts
	I0213 23:28:35.065747   54882 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:28:35.078999   54882 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411 for IP: 192.168.50.143
	I0213 23:28:35.079031   54882 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.079263   54882 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:28:35.079335   54882 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:28:35.079399   54882 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.key
	I0213 23:28:35.079429   54882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.crt with IP's: []
	I0213 23:28:35.234759   54882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.crt ...
	I0213 23:28:35.234790   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.crt: {Name:mk94055b333b6288e803e6b2e2c28ca5abf1a834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.234988   54882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.key ...
	I0213 23:28:35.235008   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/client.key: {Name:mkc2ac42c12eda45b41eef71ac6f8a59b04d0350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.235136   54882 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key.3c7b2e1c
	I0213 23:28:35.235156   54882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt.3c7b2e1c with IP's: [192.168.50.143 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 23:28:35.523603   54882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt.3c7b2e1c ...
	I0213 23:28:35.523638   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt.3c7b2e1c: {Name:mkf2c760e7bb80daec9f0335a163132e6a7bcb07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.523838   54882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key.3c7b2e1c ...
	I0213 23:28:35.523853   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key.3c7b2e1c: {Name:mk474d3e22010f2b5061eb5eb8c4e897688e0e3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.523947   54882 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt.3c7b2e1c -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt
	I0213 23:28:35.524013   54882 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key.3c7b2e1c -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key
	I0213 23:28:35.524062   54882 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key
	I0213 23:28:35.524076   54882 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.crt with IP's: []
	I0213 23:28:35.685618   54882 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.crt ...
	I0213 23:28:35.685649   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.crt: {Name:mk265871faa37f1aa127cfb86a5fc8446bb663e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.709493   54882 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key ...
	I0213 23:28:35.709528   54882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key: {Name:mk648c84359867bdbe9b4169f6526791e311338f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:28:35.709739   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:28:35.709790   54882 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:28:35.709804   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:28:35.709834   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:28:35.709856   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:28:35.709912   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:28:35.709955   54882 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:28:35.710564   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:28:35.737384   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0213 23:28:35.763009   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:28:35.791717   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/newest-cni-120411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0213 23:28:35.818284   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:28:35.844849   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:28:35.872041   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:28:35.900141   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:28:35.927018   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:28:35.953194   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:28:35.983689   54882 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:28:36.010460   54882 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0213 23:28:36.028644   54882 ssh_runner.go:195] Run: openssl version
	I0213 23:28:36.034721   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:28:36.044970   54882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:28:36.050395   54882 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:28:36.050470   54882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:28:36.057011   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:28:36.068131   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:28:36.079051   54882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:28:36.084745   54882 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:28:36.084809   54882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:28:36.092779   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:28:36.107205   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:28:36.121353   54882 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:28:36.128958   54882 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:28:36.129014   54882 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:28:36.136076   54882 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
	I0213 23:28:36.152371   54882 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:28:36.158998   54882 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 23:28:36.159069   54882 kubeadm.go:404] StartCluster: {Name:newest-cni-120411 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-120411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.143 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:28:36.159158   54882 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:28:36.159283   54882 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:28:36.209698   54882 cri.go:89] found id: ""
	I0213 23:28:36.209754   54882 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:28:36.220256   54882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:28:36.231263   54882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:28:36.241863   54882 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:28:36.241944   54882 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:28:36.372154   54882 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0213 23:28:36.372290   54882 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:28:36.743289   54882 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:28:36.743433   54882 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:28:36.743554   54882 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:28:37.020566   54882 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:28:37.061217   54882 out.go:204]   - Generating certificates and keys ...
	I0213 23:28:37.061345   54882 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:28:37.061439   54882 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:00 UTC, ends at Tue 2024-02-13 23:28:37 UTC. --
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.802487383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866917802470692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=219532da-0884-474c-b397-f1bcc501353c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.803316763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=247110f7-a15b-4b91-a0e5-592cd3dae43c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.803394741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=247110f7-a15b-4b91-a0e5-592cd3dae43c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.803590620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=247110f7-a15b-4b91-a0e5-592cd3dae43c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.853057402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=caad7c76-3042-48dd-bb42-3e45036d354e name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.853115110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=caad7c76-3042-48dd-bb42-3e45036d354e name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.855173335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2118b0c4-78bf-4aa1-bbe7-e21af15c9830 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.855583335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866917855570264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2118b0c4-78bf-4aa1-bbe7-e21af15c9830 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.856193159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aeabd043-c297-4d7e-b53a-b1b6f48c3027 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.856268382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aeabd043-c297-4d7e-b53a-b1b6f48c3027 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.856441351Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aeabd043-c297-4d7e-b53a-b1b6f48c3027 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.908972070Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=acd8c987-3711-4d8f-8dd3-83e6e2017f32 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.909079274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=acd8c987-3711-4d8f-8dd3-83e6e2017f32 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.911032773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b3100624-8c9a-44a9-8370-0985f434bca2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.911495546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866917911478013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=b3100624-8c9a-44a9-8370-0985f434bca2 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.912318977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=34b723ef-e4d7-46cd-bc88-a337c6fb900f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.912412574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=34b723ef-e4d7-46cd-bc88-a337c6fb900f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.912647478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=34b723ef-e4d7-46cd-bc88-a337c6fb900f name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.956384871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e6ce2afd-2a1f-495c-92dc-83686a30a685 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.956444414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e6ce2afd-2a1f-495c-92dc-83686a30a685 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.957836805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=23bad4aa-eeef-4f21-b8ba-44124e7241e9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.958246953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707866917958232482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=23bad4aa-eeef-4f21-b8ba-44124e7241e9 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.958976188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6252f5c7-101a-45cd-9b4e-0a0fcd76fa42 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.959022827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6252f5c7-101a-45cd-9b4e-0a0fcd76fa42 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:28:37 no-preload-778731 crio[728]: time="2024-02-13 23:28:37.959350149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762,PodSandboxId:6a68248c9129a48242333cf9faf4d480a0b10eb48de3b56e9c23e1383383d4bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1707866024819559124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5751d5c1-158a-46dc-b2ec-f74cc302de35,},Annotations:map[string]string{io.kubernetes.container.hash: a57147ec,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2,PodSandboxId:451b01d1f17f7e02b82bf2a0ef596bc5ce290e615be23968edc43882534ea2f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1707866024197508961,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-f4g5w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ddbeb6e-f3b0-48d8-82f1-f824568835c7,},Annotations:map[string]string{io.kubernetes.container.hash: 8b800d5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c,PodSandboxId:22add251089203c4ba2c66a4dd080356b4223e37b9a3ad81148e0dd4d44cea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1707866023133639170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7vcqq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 18dc29be-3e93-4a62-ad66-7838671cdd21,},Annotations:map[string]string{io.kubernetes.container.hash: c4033064,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a,PodSandboxId:fe5100663112a67affcaa674177b1347488b3eb472196a35ff6b5e69400efc96,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1707866000217113187,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e2ebe5d005dd208c52563e80c776269,},Annot
ations:map[string]string{io.kubernetes.container.hash: f3f33574,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2,PodSandboxId:f8a4c800f4dbe36ea1adb202f3bf8bfede26b7208521f3e3085c3a4e2de577bb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:1707866000109625385,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e9227a504dcf2941a6e823698ac7024,},Annotations:map[
string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2,PodSandboxId:23451f1f071ca46f98287d075c521080b8687b0213a4538960af4f770e64ca10,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1707865999940983376,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030cb6e1e835f7ec673fcbde35b715e3,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 17879330,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab,PodSandboxId:1f1128048efed55924db609b832b9cf54089f1a81e35bd85101f1bdc26567c24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1707865999767471340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-778731,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934504f9da4bbd5a965a4e20bc17e9ac,},An
notations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6252f5c7-101a-45cd-9b4e-0a0fcd76fa42 name=/runtime.v1.RuntimeService/ListContainers
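
For reference, the CRI-O debug entries above are ordinary CRI gRPC round-trips (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers) issued against crio by the kubelet and by crictl. A minimal client-side sketch of the same three calls, assuming the crio socket path shown in the node annotations below and the published k8s.io/cri-api bindings (the 5-second timeout is illustrative):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket; the path matches the cri-socket annotation on the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("%s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("imagefsinfo: %v", err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("%s used=%d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is what
	// produces the "No filters were applied, returning full container list" responses above.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatalf("listcontainers: %v", err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}

Against a healthy node this should report cri-o 1.24.1 and the seven running containers shown in the next section.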
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	032daf7e93d06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   6a68248c9129a       storage-provisioner
	bb7a89704fa24       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   451b01d1f17f7       coredns-76f75df574-f4g5w
	6b12d9bbcaaf9       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   22add25108920       kube-proxy-7vcqq
	75e6b925f0095       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   15 minutes ago      Running             etcd                      2                   fe5100663112a       etcd-no-preload-778731
	f193476ba382c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   15 minutes ago      Running             kube-scheduler            2                   f8a4c800f4dbe       kube-scheduler-no-preload-778731
	a14e489a0cbc6       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   15 minutes ago      Running             kube-apiserver            2                   23451f1f071ca       kube-apiserver-no-preload-778731
	1bbf42830ebf1       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   15 minutes ago      Running             kube-controller-manager   2                   1f1128048efed       kube-controller-manager-no-preload-778731
	
	
	==> coredns [bb7a89704fa24a3af5b21a91fa2e0ac0b4bcc10639f5417eef1c34d25d3290c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55809 - 17046 "HINFO IN 6020288737557843742.2579504881925505847. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009476941s
	
	
	==> describe nodes <==
	Name:               no-preload-778731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-778731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=no-preload-778731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-778731
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:28:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:24:00 +0000   Tue, 13 Feb 2024 23:13:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.31
	  Hostname:    no-preload-778731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ad4a7d6c34b4e29947628a783208913
	  System UUID:                5ad4a7d6-c34b-4e29-9476-28a783208913
	  Boot ID:                    945b7cbf-253c-4566-ad72-aa54f0f30632
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-f4g5w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-778731                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-778731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-778731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7vcqq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-778731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-mt6qd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x9 over 15m)  kubelet          Node no-preload-778731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x7 over 15m)  kubelet          Node no-preload-778731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-778731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-778731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-778731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-778731 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node no-preload-778731 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeReady                15m                kubelet          Node no-preload-778731 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-778731 event: Registered Node no-preload-778731 in Controller
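
The Conditions, capacity, and events tables above are kubectl describe output for the node. A short client-go sketch (illustrative only; the kubeconfig flag and the node name are taken from this run) that pulls the same condition and allocatable data programmatically:

package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-778731", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Mirror the Conditions table: type, status, reason.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
		"memory:", node.Status.Allocatable.Memory().String())
}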
	
	
	==> dmesg <==
	[Feb13 23:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070290] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.408562] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.388611] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[Feb13 23:08] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.590058] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.729248] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.116953] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.176176] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.136905] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.261295] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +29.283296] systemd-fstab-generator[1340]: Ignoring "noauto" for root device
	[ +19.333544] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:09] hrtimer: interrupt took 2796717 ns
	[Feb13 23:13] systemd-fstab-generator[3985]: Ignoring "noauto" for root device
	[ +10.320444] systemd-fstab-generator[4316]: Ignoring "noauto" for root device
	[ +14.765805] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [75e6b925f0095c06b164e57564185f9ba4f91d83c57629df4ab1249ad44e2f6a] <==
	{"level":"info","ts":"2024-02-13T23:13:23.153997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:23.154045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 received MsgPreVoteResp from 1a7f054d9a9436d0 at term 1"}
	{"level":"info","ts":"2024-02-13T23:13:23.154084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became candidate at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 received MsgVoteResp from 1a7f054d9a9436d0 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1a7f054d9a9436d0 became leader at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.154161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1a7f054d9a9436d0 elected leader 1a7f054d9a9436d0 at term 2"}
	{"level":"info","ts":"2024-02-13T23:13:23.155912Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157291Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"1a7f054d9a9436d0","local-member-attributes":"{Name:no-preload-778731 ClientURLs:[https://192.168.83.31:2379]}","request-path":"/0/members/1a7f054d9a9436d0/attributes","cluster-id":"bdb46277f8bc3ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-13T23:13:23.157389Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:23.157801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bdb46277f8bc3ba","local-member-id":"1a7f054d9a9436d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157947Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:23.157987Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:23.16002Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.31:2379"}
	{"level":"info","ts":"2024-02-13T23:13:23.162012Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:23.162088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:23.171293Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:23:23.213068Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-02-13T23:23:23.216527Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.796434ms","hash":4246754320}
	{"level":"info","ts":"2024-02-13T23:23:23.216585Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4246754320,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-02-13T23:28:23.221724Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-02-13T23:28:23.22518Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":957,"took":"2.914486ms","hash":1926341442}
	{"level":"info","ts":"2024-02-13T23:28:23.225271Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1926341442,"revision":957,"compact-revision":714}
	{"level":"info","ts":"2024-02-13T23:28:36.674444Z","caller":"traceutil/trace.go:171","msg":"trace[472013255] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"144.190762ms","start":"2024-02-13T23:28:36.530204Z","end":"2024-02-13T23:28:36.674395Z","steps":["trace[472013255] 'process raft request'  (duration: 143.990987ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:28:37.042133Z","caller":"traceutil/trace.go:171","msg":"trace[622443530] transaction","detail":"{read_only:false; response_revision:1212; number_of_response:1; }","duration":"107.98105ms","start":"2024-02-13T23:28:36.934131Z","end":"2024-02-13T23:28:37.042113Z","steps":["trace[622443530] 'process raft request'  (duration: 99.751714ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:28:38 up 20 min,  0 users,  load average: 0.24, 0.40, 0.37
	Linux no-preload-778731 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a14e489a0cbc61a245fdcf34f07e711ad5138f292b8e99f2b4bb1f13c20eabe2] <==
	I0213 23:23:25.906087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:25.905078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:25.905176       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:24:25.905192       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:24:25.906251       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:24:25.906397       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:24:25.906443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:25.905625       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:25.906003       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:26:25.906093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:26:25.906833       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:26:25.906939       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:26:25.907118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:28:24.909103       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:24.909230       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0213 23:28:25.909461       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:25.909591       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:28:25.909625       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:28:25.909879       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:25.910057       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:28:25.911301       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1bbf42830ebf1ad34b0c6c8d55712af1aa68899cdb37dcb8f13dbdd75de8bfab] <==
	I0213 23:22:40.888326       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:10.391953       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:10.898255       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:23:40.398195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:23:40.908417       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:10.405788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:10.917762       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:24:40.411950       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:24:40.928846       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:24:51.885014       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="120.037µs"
	I0213 23:25:05.886387       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.403µs"
	E0213 23:25:10.418765       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:10.936727       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:25:40.424403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:40.947164       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:10.430525       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:10.958383       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:40.437614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:40.967762       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:10.443934       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:10.977893       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:40.450722       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:40.987287       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:28:10.456901       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:28:10.999653       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6b12d9bbcaaf9e6a2c830e2de677e5fc7e6378d9b76206239441f2ab8fb3e01c] <==
	I0213 23:13:44.406158       1 server_others.go:72] "Using iptables proxy"
	I0213 23:13:44.424881       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.83.31"]
	I0213 23:13:44.543165       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0213 23:13:44.543258       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:13:44.543290       1 server_others.go:168] "Using iptables Proxier"
	I0213 23:13:44.552756       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:13:44.553029       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0213 23:13:44.553079       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:13:44.554400       1 config.go:188] "Starting service config controller"
	I0213 23:13:44.554461       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:13:44.554504       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:13:44.554521       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:13:44.559201       1 config.go:315] "Starting node config controller"
	I0213 23:13:44.559448       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:13:44.655000       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:13:44.655131       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:13:44.661331       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f193476ba382cbc25194706bd846aee2e1115d6d206184623ccbf81353d4d2f2] <==
	W0213 23:13:25.816569       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0213 23:13:25.816627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0213 23:13:25.853391       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0213 23:13:25.853459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0213 23:13:25.861413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:25.861470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:25.903772       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:13:25.903839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0213 23:13:25.929051       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:25.929160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:26.065142       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.065283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.088108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.088206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.088452       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:26.088589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:26.143971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:26.144094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:26.282998       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:26.283110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:26.365334       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0213 23:13:26.365416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0213 23:13:26.441149       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:26.441208       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0213 23:13:28.211800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:00 UTC, ends at Tue 2024-02-13 23:28:38 UTC. --
	Feb 13 23:26:00 no-preload-778731 kubelet[4323]: E0213 23:26:00.868998    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:12 no-preload-778731 kubelet[4323]: E0213 23:26:12.867434    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:23 no-preload-778731 kubelet[4323]: E0213 23:26:23.867869    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]: E0213 23:26:28.931966    4323 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:26:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:26:37 no-preload-778731 kubelet[4323]: E0213 23:26:37.867293    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:26:51 no-preload-778731 kubelet[4323]: E0213 23:26:51.867594    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:03 no-preload-778731 kubelet[4323]: E0213 23:27:03.867857    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:14 no-preload-778731 kubelet[4323]: E0213 23:27:14.867869    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:28 no-preload-778731 kubelet[4323]: E0213 23:27:28.931916    4323 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:27:28 no-preload-778731 kubelet[4323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:27:28 no-preload-778731 kubelet[4323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:27:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:27:29 no-preload-778731 kubelet[4323]: E0213 23:27:29.871204    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:43 no-preload-778731 kubelet[4323]: E0213 23:27:43.867818    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:27:58 no-preload-778731 kubelet[4323]: E0213 23:27:58.868189    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:28:09 no-preload-778731 kubelet[4323]: E0213 23:28:09.868154    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:28:23 no-preload-778731 kubelet[4323]: E0213 23:28:23.868051    4323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mt6qd" podUID="9726753d-b785-48dc-81d7-86a787851927"
	Feb 13 23:28:28 no-preload-778731 kubelet[4323]: E0213 23:28:28.890488    4323 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Feb 13 23:28:28 no-preload-778731 kubelet[4323]: E0213 23:28:28.934891    4323 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:28:28 no-preload-778731 kubelet[4323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:28:28 no-preload-778731 kubelet[4323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:28:28 no-preload-778731 kubelet[4323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [032daf7e93d06fa5fdaffe723fa3cfe2a1bdb3c7f3c9af68f5ba3decaa012762] <==
	I0213 23:13:44.945554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:13:44.968933       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:13:44.970209       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:13:45.009329       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:13:45.010789       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d1e291c-d674-40de-b9a3-332e6609a44e", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84 became leader
	I0213 23:13:45.011115       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84!
	I0213 23:13:45.112155       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-778731_0721ce8d-b48f-4bfa-b7c5-5083713ebd84!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-778731 -n no-preload-778731
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-778731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mt6qd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd: exit status 1 (76.971443ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mt6qd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-778731 describe pod metrics-server-57f55c9bc5-mt6qd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (74.87s)
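A rough manual follow-up, sketched here under the assumption that the no-preload-778731 profile still exists at this point (the Audit table below shows it being deleted at 23:28 UTC), would be to re-list the non-running pods and pull the most recent kube-system events instead of describing a pod that has already disappeared:

    kubectl --context no-preload-778731 get pods -A --field-selector=status.phase!=Running
    kubectl --context no-preload-778731 get events -n kube-system --sort-by=.lastTimestamp | tail -n 20

Both commands reuse only names that appear in the log above; the describe step failed with NotFound because metrics-server-57f55c9bc5-mt6qd was already gone by the time the post-mortem ran.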

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (169.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
E0213 23:30:40.453621   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-02-13 23:30:40.742779571 +0000 UTC m=+5660.357553486
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-083863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.258µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-083863 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
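The describe above produced no deployment info because the test's own context had already expired (note the 1.258µs duration), so the image check had nothing to inspect. A rough manual equivalent, assuming the default-k8s-diff-port-083863 profile is still up, is to read the container images straight from the deployments in the kubernetes-dashboard namespace:

    kubectl --context default-k8s-diff-port-083863 -n kubernetes-dashboard get deploy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'

If the dashboard addon actually deployed, the MetricsScraper override passed when enabling it (registry.k8s.io/echoserver:1.4, visible in the Audit table below) should show up in that output.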
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-083863 logs -n 25
E0213 23:30:41.209542   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-083863 logs -n 25: (1.509590357s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p old-k8s-version-245122             | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-778731                  | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| start   | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:03 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-340656                 | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC | 13 Feb 24 23:18 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-083863       | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-083863 | jenkins | v1.32.0 | 13 Feb 24 23:05 UTC | 13 Feb 24 23:18 UTC |
	|         | default-k8s-diff-port-083863                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-245122                              | old-k8s-version-245122       | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:28 UTC |
	| start   | -p newest-cni-120411 --memory=2200 --alsologtostderr   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:29 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-778731                                   | no-preload-778731            | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:28 UTC |
	| start   | -p auto-397221 --memory=3072                           | auto-397221                  | jenkins | v1.32.0 | 13 Feb 24 23:28 UTC | 13 Feb 24 23:30 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-120411             | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-120411                  | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-120411 --memory=2200 --alsologtostderr   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:30 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-340656                                  | embed-certs-340656           | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC | 13 Feb 24 23:29 UTC |
	| start   | -p kindnet-397221                                      | kindnet-397221               | jenkins | v1.32.0 | 13 Feb 24 23:29 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-120411 image list                           | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	| delete  | -p newest-cni-120411                                   | newest-cni-120411            | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	| start   | -p calico-397221 --memory=3072                         | calico-397221                | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-397221 pgrep -a                                | auto-397221                  | jenkins | v1.32.0 | 13 Feb 24 23:30 UTC | 13 Feb 24 23:30 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 23:30:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 23:30:09.461332   56875 out.go:291] Setting OutFile to fd 1 ...
	I0213 23:30:09.461454   56875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:30:09.461466   56875 out.go:304] Setting ErrFile to fd 2...
	I0213 23:30:09.461471   56875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 23:30:09.461708   56875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 23:30:09.462388   56875 out.go:298] Setting JSON to false
	I0213 23:30:09.463360   56875 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7961,"bootTime":1707859049,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 23:30:09.463418   56875 start.go:138] virtualization: kvm guest
	I0213 23:30:09.466020   56875 out.go:177] * [calico-397221] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 23:30:09.467538   56875 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 23:30:09.467567   56875 notify.go:220] Checking for updates...
	I0213 23:30:09.469960   56875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 23:30:09.471539   56875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 23:30:09.473184   56875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:30:09.474487   56875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 23:30:09.475758   56875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 23:30:09.477520   56875 config.go:182] Loaded profile config "auto-397221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:30:09.477671   56875 config.go:182] Loaded profile config "default-k8s-diff-port-083863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:30:09.477765   56875 config.go:182] Loaded profile config "kindnet-397221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:30:09.477851   56875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 23:30:09.515298   56875 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 23:30:09.516577   56875 start.go:298] selected driver: kvm2
	I0213 23:30:09.516595   56875 start.go:902] validating driver "kvm2" against <nil>
	I0213 23:30:09.516606   56875 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 23:30:09.517398   56875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:30:09.517470   56875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 23:30:09.533698   56875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 23:30:09.533747   56875 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 23:30:09.534028   56875 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0213 23:30:09.534103   56875 cni.go:84] Creating CNI manager for "calico"
	I0213 23:30:09.534128   56875 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0213 23:30:09.534152   56875 start_flags.go:321] config:
	{Name:calico-397221 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:30:09.534326   56875 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 23:30:09.536168   56875 out.go:177] * Starting control plane node calico-397221 in cluster calico-397221
	I0213 23:30:06.232153   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:08.732007   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:07.592355   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:07.592850   56231 main.go:141] libmachine: (kindnet-397221) DBG | unable to find current IP address of domain kindnet-397221 in network mk-kindnet-397221
	I0213 23:30:07.592874   56231 main.go:141] libmachine: (kindnet-397221) DBG | I0213 23:30:07.592810   56265 retry.go:31] will retry after 3.869044496s: waiting for machine to come up
	I0213 23:30:11.464555   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:11.465166   56231 main.go:141] libmachine: (kindnet-397221) DBG | unable to find current IP address of domain kindnet-397221 in network mk-kindnet-397221
	I0213 23:30:11.465196   56231 main.go:141] libmachine: (kindnet-397221) DBG | I0213 23:30:11.465118   56265 retry.go:31] will retry after 5.182283314s: waiting for machine to come up
	I0213 23:30:09.537359   56875 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:30:09.537395   56875 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0213 23:30:09.537407   56875 cache.go:56] Caching tarball of preloaded images
	I0213 23:30:09.537486   56875 preload.go:174] Found /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0213 23:30:09.537501   56875 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0213 23:30:09.537589   56875 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/calico-397221/config.json ...
	I0213 23:30:09.537613   56875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/calico-397221/config.json: {Name:mke39b7204784a12f4ff13976273d997d45a0df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:09.537779   56875 start.go:365] acquiring machines lock for calico-397221: {Name:mk20d95819b2bb2be508d2c38d817d9c9b46307a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0213 23:30:10.732707   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:13.231068   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:15.231637   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:18.191837   56875 start.go:369] acquired machines lock for "calico-397221" in 8.65399221s
	I0213 23:30:18.191924   56875 start.go:93] Provisioning new machine with config: &{Name:calico-397221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0213 23:30:18.192362   56875 start.go:125] createHost starting for "" (driver="kvm2")
	I0213 23:30:18.194340   56875 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0213 23:30:18.194622   56875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 23:30:18.194661   56875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 23:30:18.211109   56875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40597
	I0213 23:30:18.211516   56875 main.go:141] libmachine: () Calling .GetVersion
	I0213 23:30:18.212038   56875 main.go:141] libmachine: Using API Version  1
	I0213 23:30:18.212061   56875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 23:30:18.212479   56875 main.go:141] libmachine: () Calling .GetMachineName
	I0213 23:30:18.212661   56875 main.go:141] libmachine: (calico-397221) Calling .GetMachineName
	I0213 23:30:18.212829   56875 main.go:141] libmachine: (calico-397221) Calling .DriverName
	I0213 23:30:18.212989   56875 start.go:159] libmachine.API.Create for "calico-397221" (driver="kvm2")
	I0213 23:30:18.213018   56875 client.go:168] LocalClient.Create starting
	I0213 23:30:18.213056   56875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem
	I0213 23:30:18.213108   56875 main.go:141] libmachine: Decoding PEM data...
	I0213 23:30:18.213132   56875 main.go:141] libmachine: Parsing certificate...
	I0213 23:30:18.213208   56875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem
	I0213 23:30:18.213237   56875 main.go:141] libmachine: Decoding PEM data...
	I0213 23:30:18.213255   56875 main.go:141] libmachine: Parsing certificate...
	I0213 23:30:18.213284   56875 main.go:141] libmachine: Running pre-create checks...
	I0213 23:30:18.213295   56875 main.go:141] libmachine: (calico-397221) Calling .PreCreateCheck
	I0213 23:30:18.213675   56875 main.go:141] libmachine: (calico-397221) Calling .GetConfigRaw
	I0213 23:30:18.214071   56875 main.go:141] libmachine: Creating machine...
	I0213 23:30:18.214085   56875 main.go:141] libmachine: (calico-397221) Calling .Create
	I0213 23:30:18.214231   56875 main.go:141] libmachine: (calico-397221) Creating KVM machine...
	I0213 23:30:18.215482   56875 main.go:141] libmachine: (calico-397221) DBG | found existing default KVM network
	I0213 23:30:18.216685   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.216525   56949 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:23:2b} reservation:<nil>}
	I0213 23:30:18.217892   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.217797   56949 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027e800}
	I0213 23:30:18.223154   56875 main.go:141] libmachine: (calico-397221) DBG | trying to create private KVM network mk-calico-397221 192.168.50.0/24...
	I0213 23:30:18.311058   56875 main.go:141] libmachine: (calico-397221) Setting up store path in /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221 ...
	I0213 23:30:18.311084   56875 main.go:141] libmachine: (calico-397221) DBG | private KVM network mk-calico-397221 192.168.50.0/24 created
	I0213 23:30:18.311095   56875 main.go:141] libmachine: (calico-397221) Building disk image from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 23:30:18.311114   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.310995   56949 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:30:18.311183   56875 main.go:141] libmachine: (calico-397221) Downloading /home/jenkins/minikube-integration/18171-8990/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0213 23:30:18.534016   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.533860   56949 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221/id_rsa...
	I0213 23:30:18.719523   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.719340   56949 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221/calico-397221.rawdisk...
	I0213 23:30:18.719570   56875 main.go:141] libmachine: (calico-397221) DBG | Writing magic tar header
	I0213 23:30:18.719620   56875 main.go:141] libmachine: (calico-397221) DBG | Writing SSH key tar header
	I0213 23:30:18.719642   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:18.719485   56949 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221 ...
	I0213 23:30:18.719658   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221 (perms=drwx------)
	I0213 23:30:18.719676   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube/machines (perms=drwxr-xr-x)
	I0213 23:30:18.719687   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990/.minikube (perms=drwxr-xr-x)
	I0213 23:30:18.719698   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221
	I0213 23:30:18.719710   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube/machines
	I0213 23:30:18.719718   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 23:30:18.719725   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18171-8990
	I0213 23:30:18.719732   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0213 23:30:18.719743   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins/minikube-integration/18171-8990 (perms=drwxrwxr-x)
	I0213 23:30:18.719749   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home/jenkins
	I0213 23:30:18.719756   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0213 23:30:18.719764   56875 main.go:141] libmachine: (calico-397221) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0213 23:30:18.719769   56875 main.go:141] libmachine: (calico-397221) Creating domain...
	I0213 23:30:18.719779   56875 main.go:141] libmachine: (calico-397221) DBG | Checking permissions on dir: /home
	I0213 23:30:18.719784   56875 main.go:141] libmachine: (calico-397221) DBG | Skipping /home - not owner
	I0213 23:30:18.720966   56875 main.go:141] libmachine: (calico-397221) define libvirt domain using xml: 
	I0213 23:30:18.720993   56875 main.go:141] libmachine: (calico-397221) <domain type='kvm'>
	I0213 23:30:18.721001   56875 main.go:141] libmachine: (calico-397221)   <name>calico-397221</name>
	I0213 23:30:18.721007   56875 main.go:141] libmachine: (calico-397221)   <memory unit='MiB'>3072</memory>
	I0213 23:30:18.721013   56875 main.go:141] libmachine: (calico-397221)   <vcpu>2</vcpu>
	I0213 23:30:18.721018   56875 main.go:141] libmachine: (calico-397221)   <features>
	I0213 23:30:18.721029   56875 main.go:141] libmachine: (calico-397221)     <acpi/>
	I0213 23:30:18.721034   56875 main.go:141] libmachine: (calico-397221)     <apic/>
	I0213 23:30:18.721040   56875 main.go:141] libmachine: (calico-397221)     <pae/>
	I0213 23:30:18.721045   56875 main.go:141] libmachine: (calico-397221)     
	I0213 23:30:18.721052   56875 main.go:141] libmachine: (calico-397221)   </features>
	I0213 23:30:18.721060   56875 main.go:141] libmachine: (calico-397221)   <cpu mode='host-passthrough'>
	I0213 23:30:18.721065   56875 main.go:141] libmachine: (calico-397221)   
	I0213 23:30:18.721073   56875 main.go:141] libmachine: (calico-397221)   </cpu>
	I0213 23:30:18.721080   56875 main.go:141] libmachine: (calico-397221)   <os>
	I0213 23:30:18.721085   56875 main.go:141] libmachine: (calico-397221)     <type>hvm</type>
	I0213 23:30:18.721092   56875 main.go:141] libmachine: (calico-397221)     <boot dev='cdrom'/>
	I0213 23:30:18.721099   56875 main.go:141] libmachine: (calico-397221)     <boot dev='hd'/>
	I0213 23:30:18.721112   56875 main.go:141] libmachine: (calico-397221)     <bootmenu enable='no'/>
	I0213 23:30:18.721120   56875 main.go:141] libmachine: (calico-397221)   </os>
	I0213 23:30:18.721131   56875 main.go:141] libmachine: (calico-397221)   <devices>
	I0213 23:30:18.721141   56875 main.go:141] libmachine: (calico-397221)     <disk type='file' device='cdrom'>
	I0213 23:30:18.721160   56875 main.go:141] libmachine: (calico-397221)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221/boot2docker.iso'/>
	I0213 23:30:18.721171   56875 main.go:141] libmachine: (calico-397221)       <target dev='hdc' bus='scsi'/>
	I0213 23:30:18.721177   56875 main.go:141] libmachine: (calico-397221)       <readonly/>
	I0213 23:30:18.721184   56875 main.go:141] libmachine: (calico-397221)     </disk>
	I0213 23:30:18.721196   56875 main.go:141] libmachine: (calico-397221)     <disk type='file' device='disk'>
	I0213 23:30:18.721211   56875 main.go:141] libmachine: (calico-397221)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0213 23:30:18.721230   56875 main.go:141] libmachine: (calico-397221)       <source file='/home/jenkins/minikube-integration/18171-8990/.minikube/machines/calico-397221/calico-397221.rawdisk'/>
	I0213 23:30:18.721243   56875 main.go:141] libmachine: (calico-397221)       <target dev='hda' bus='virtio'/>
	I0213 23:30:18.721256   56875 main.go:141] libmachine: (calico-397221)     </disk>
	I0213 23:30:18.721268   56875 main.go:141] libmachine: (calico-397221)     <interface type='network'>
	I0213 23:30:18.721282   56875 main.go:141] libmachine: (calico-397221)       <source network='mk-calico-397221'/>
	I0213 23:30:18.721294   56875 main.go:141] libmachine: (calico-397221)       <model type='virtio'/>
	I0213 23:30:18.721308   56875 main.go:141] libmachine: (calico-397221)     </interface>
	I0213 23:30:18.721320   56875 main.go:141] libmachine: (calico-397221)     <interface type='network'>
	I0213 23:30:18.721338   56875 main.go:141] libmachine: (calico-397221)       <source network='default'/>
	I0213 23:30:18.721352   56875 main.go:141] libmachine: (calico-397221)       <model type='virtio'/>
	I0213 23:30:18.721362   56875 main.go:141] libmachine: (calico-397221)     </interface>
	I0213 23:30:18.721373   56875 main.go:141] libmachine: (calico-397221)     <serial type='pty'>
	I0213 23:30:18.721391   56875 main.go:141] libmachine: (calico-397221)       <target port='0'/>
	I0213 23:30:18.721399   56875 main.go:141] libmachine: (calico-397221)     </serial>
	I0213 23:30:18.721411   56875 main.go:141] libmachine: (calico-397221)     <console type='pty'>
	I0213 23:30:18.721425   56875 main.go:141] libmachine: (calico-397221)       <target type='serial' port='0'/>
	I0213 23:30:18.721434   56875 main.go:141] libmachine: (calico-397221)     </console>
	I0213 23:30:18.721442   56875 main.go:141] libmachine: (calico-397221)     <rng model='virtio'>
	I0213 23:30:18.721456   56875 main.go:141] libmachine: (calico-397221)       <backend model='random'>/dev/random</backend>
	I0213 23:30:18.721472   56875 main.go:141] libmachine: (calico-397221)     </rng>
	I0213 23:30:18.721482   56875 main.go:141] libmachine: (calico-397221)     
	I0213 23:30:18.721500   56875 main.go:141] libmachine: (calico-397221)     
	I0213 23:30:18.721511   56875 main.go:141] libmachine: (calico-397221)   </devices>
	I0213 23:30:18.721522   56875 main.go:141] libmachine: (calico-397221) </domain>
	I0213 23:30:18.721537   56875 main.go:141] libmachine: (calico-397221) 
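The block above is the libvirt domain XML minikube generated for calico-397221, logged line by line before the domain is defined. As a rough illustration of how such a definition can be produced, here is a self-contained Go sketch that renders a simplified version of the same XML with text/template; the struct and template are hypothetical stand-ins, not minikube's real types:

	package main

	import (
		"os"
		"text/template"
	)

	// domainConfig carries only the fields that vary in the XML above (illustrative names).
	type domainConfig struct {
		Name     string
		MemoryMB int
		VCPUs    int
		ISOPath  string
		DiskPath string
		Network  string
	}

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	    <interface type='network'><source network='default'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		cfg := domainConfig{
			Name:     "calico-397221",
			MemoryMB: 3072,
			VCPUs:    2,
			ISOPath:  "/path/to/boot2docker.iso",       // placeholder path
			DiskPath: "/path/to/calico-397221.rawdisk", // placeholder path
			Network:  "mk-calico-397221",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}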
	I0213 23:30:18.725650   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:a0:1c:c5 in network default
	I0213 23:30:18.726270   56875 main.go:141] libmachine: (calico-397221) Ensuring networks are active...
	I0213 23:30:18.726302   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:18.727141   56875 main.go:141] libmachine: (calico-397221) Ensuring network default is active
	I0213 23:30:18.727602   56875 main.go:141] libmachine: (calico-397221) Ensuring network mk-calico-397221 is active
	I0213 23:30:18.728248   56875 main.go:141] libmachine: (calico-397221) Getting domain xml...
	I0213 23:30:18.729286   56875 main.go:141] libmachine: (calico-397221) Creating domain...
	I0213 23:30:16.648671   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.649268   56231 main.go:141] libmachine: (kindnet-397221) Found IP for machine: 192.168.61.97
	I0213 23:30:16.649295   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has current primary IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.649305   56231 main.go:141] libmachine: (kindnet-397221) Reserving static IP address...
	I0213 23:30:16.649665   56231 main.go:141] libmachine: (kindnet-397221) DBG | unable to find host DHCP lease matching {name: "kindnet-397221", mac: "52:54:00:b2:84:75", ip: "192.168.61.97"} in network mk-kindnet-397221
	I0213 23:30:16.732974   56231 main.go:141] libmachine: (kindnet-397221) DBG | Getting to WaitForSSH function...
	I0213 23:30:16.733004   56231 main.go:141] libmachine: (kindnet-397221) Reserved static IP address: 192.168.61.97
	I0213 23:30:16.733018   56231 main.go:141] libmachine: (kindnet-397221) Waiting for SSH to be available...
	I0213 23:30:16.736175   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.736627   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:16.736659   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.736826   56231 main.go:141] libmachine: (kindnet-397221) DBG | Using SSH client type: external
	I0213 23:30:16.736855   56231 main.go:141] libmachine: (kindnet-397221) DBG | Using SSH private key: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa (-rw-------)
	I0213 23:30:16.736892   56231 main.go:141] libmachine: (kindnet-397221) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0213 23:30:16.736912   56231 main.go:141] libmachine: (kindnet-397221) DBG | About to run SSH command:
	I0213 23:30:16.736930   56231 main.go:141] libmachine: (kindnet-397221) DBG | exit 0
	I0213 23:30:16.830085   56231 main.go:141] libmachine: (kindnet-397221) DBG | SSH cmd err, output: <nil>: 
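"Using SSH client type: external" above means the driver shells out to the system ssh binary with the listed options and retries until "exit 0" succeeds. A rough Go equivalent of one such probe via os/exec (the key path and address are from this run; the program itself is an illustrative sketch, not minikube's sshutil code):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa",
			"-p", "22",
			"docker@192.168.61.97",
			"exit 0", // the readiness probe: any successful run means SSH is up
		}
		cmd := exec.Command("/usr/bin/ssh", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			// Non-zero exit: the guest is not reachable yet; the driver would retry.
			os.Exit(1)
		}
	}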
	I0213 23:30:16.830384   56231 main.go:141] libmachine: (kindnet-397221) KVM machine creation complete!
	I0213 23:30:16.830725   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetConfigRaw
	I0213 23:30:16.831246   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:16.831457   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:16.831621   56231 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0213 23:30:16.831649   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetState
	I0213 23:30:16.832829   56231 main.go:141] libmachine: Detecting operating system of created instance...
	I0213 23:30:16.832848   56231 main.go:141] libmachine: Waiting for SSH to be available...
	I0213 23:30:16.832856   56231 main.go:141] libmachine: Getting to WaitForSSH function...
	I0213 23:30:16.832863   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:16.835132   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.835528   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:16.835578   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.835711   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:16.835888   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:16.836062   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:16.836203   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:16.836369   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:16.836706   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:16.836719   56231 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0213 23:30:16.953332   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:30:16.953364   56231 main.go:141] libmachine: Detecting the provisioner...
	I0213 23:30:16.953376   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:16.956363   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.956776   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:16.956812   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:16.957064   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:16.957294   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:16.957457   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:16.957588   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:16.957780   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:16.958228   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:16.958244   56231 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0213 23:30:17.074965   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0213 23:30:17.075074   56231 main.go:141] libmachine: found compatible host: buildroot
	I0213 23:30:17.075091   56231 main.go:141] libmachine: Provisioning with buildroot...
	I0213 23:30:17.075104   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetMachineName
	I0213 23:30:17.075373   56231 buildroot.go:166] provisioning hostname "kindnet-397221"
	I0213 23:30:17.075408   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetMachineName
	I0213 23:30:17.075594   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.078200   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.078558   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.078590   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.078716   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:17.078911   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.079048   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.079207   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:17.079357   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:17.079726   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:17.079741   56231 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-397221 && echo "kindnet-397221" | sudo tee /etc/hostname
	I0213 23:30:17.207272   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-397221
	
	I0213 23:30:17.207300   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.210091   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.210488   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.210543   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.210676   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:17.210871   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.211013   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.211132   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:17.211302   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:17.211609   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:17.211625   56231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-397221' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-397221/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-397221' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0213 23:30:17.339328   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0213 23:30:17.339363   56231 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18171-8990/.minikube CaCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18171-8990/.minikube}
	I0213 23:30:17.339408   56231 buildroot.go:174] setting up certificates
	I0213 23:30:17.339424   56231 provision.go:83] configureAuth start
	I0213 23:30:17.339440   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetMachineName
	I0213 23:30:17.339739   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetIP
	I0213 23:30:17.342683   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.343056   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.343087   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.343219   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.346112   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.346431   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.346459   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.346570   56231 provision.go:138] copyHostCerts
	I0213 23:30:17.346626   56231 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem, removing ...
	I0213 23:30:17.346645   56231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem
	I0213 23:30:17.346712   56231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/ca.pem (1078 bytes)
	I0213 23:30:17.346860   56231 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem, removing ...
	I0213 23:30:17.346876   56231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem
	I0213 23:30:17.346920   56231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/cert.pem (1123 bytes)
	I0213 23:30:17.346998   56231 exec_runner.go:144] found /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem, removing ...
	I0213 23:30:17.347006   56231 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem
	I0213 23:30:17.347029   56231 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18171-8990/.minikube/key.pem (1675 bytes)
	I0213 23:30:17.347090   56231 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem org=jenkins.kindnet-397221 san=[192.168.61.97 192.168.61.97 localhost 127.0.0.1 minikube kindnet-397221]
	I0213 23:30:17.422030   56231 provision.go:172] copyRemoteCerts
	I0213 23:30:17.422096   56231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0213 23:30:17.422118   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.424941   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.425428   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.425467   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.425649   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:17.425903   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.426067   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:17.426233   56231 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa Username:docker}
	I0213 23:30:17.519489   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0213 23:30:17.546739   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0213 23:30:17.575751   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0213 23:30:17.603155   56231 provision.go:86] duration metric: configureAuth took 263.717441ms
	I0213 23:30:17.603184   56231 buildroot.go:189] setting minikube options for container-runtime
	I0213 23:30:17.603359   56231 config.go:182] Loaded profile config "kindnet-397221": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 23:30:17.603440   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.606535   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.606823   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.606849   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.607077   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:17.607300   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.607475   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.607645   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:17.607849   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:17.608166   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:17.608182   56231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0213 23:30:17.925788   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0213 23:30:17.925825   56231 main.go:141] libmachine: Checking connection to Docker...
	I0213 23:30:17.925834   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetURL
	I0213 23:30:17.927180   56231 main.go:141] libmachine: (kindnet-397221) DBG | Using libvirt version 6000000
	I0213 23:30:17.929313   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.929697   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.929726   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.929938   56231 main.go:141] libmachine: Docker is up and running!
	I0213 23:30:17.929953   56231 main.go:141] libmachine: Reticulating splines...
	I0213 23:30:17.929960   56231 client.go:171] LocalClient.Create took 26.267569043s
	I0213 23:30:17.929979   56231 start.go:167] duration metric: libmachine.API.Create for "kindnet-397221" took 26.267622236s
	I0213 23:30:17.929988   56231 start.go:300] post-start starting for "kindnet-397221" (driver="kvm2")
	I0213 23:30:17.930001   56231 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0213 23:30:17.930016   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:17.930267   56231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0213 23:30:17.930299   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:17.932800   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.933162   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:17.933193   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:17.933337   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:17.933537   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:17.933687   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:17.933809   56231 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa Username:docker}
	I0213 23:30:18.021705   56231 ssh_runner.go:195] Run: cat /etc/os-release
	I0213 23:30:18.027325   56231 info.go:137] Remote host: Buildroot 2021.02.12
	I0213 23:30:18.027359   56231 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/addons for local assets ...
	I0213 23:30:18.027441   56231 filesync.go:126] Scanning /home/jenkins/minikube-integration/18171-8990/.minikube/files for local assets ...
	I0213 23:30:18.027513   56231 filesync.go:149] local asset: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem -> 162002.pem in /etc/ssl/certs
	I0213 23:30:18.027598   56231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0213 23:30:18.038588   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:30:18.064264   56231 start.go:303] post-start completed in 134.264819ms
	I0213 23:30:18.064368   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetConfigRaw
	I0213 23:30:18.065120   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetIP
	I0213 23:30:18.068284   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.068641   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:18.068670   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.068954   56231 profile.go:148] Saving config to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/config.json ...
	I0213 23:30:18.069157   56231 start.go:128] duration metric: createHost completed in 26.425635081s
	I0213 23:30:18.069186   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:18.071547   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.071910   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:18.071942   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.072141   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:18.072333   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:18.072505   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:18.072680   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:18.072838   56231 main.go:141] libmachine: Using SSH client type: native
	I0213 23:30:18.073150   56231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a4a0] 0x80d180 <nil>  [] 0s} 192.168.61.97 22 <nil> <nil>}
	I0213 23:30:18.073163   56231 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0213 23:30:18.191648   56231 main.go:141] libmachine: SSH cmd err, output: <nil>: 1707867018.171280784
	
	I0213 23:30:18.191673   56231 fix.go:206] guest clock: 1707867018.171280784
	I0213 23:30:18.191683   56231 fix.go:219] Guest: 2024-02-13 23:30:18.171280784 +0000 UTC Remote: 2024-02-13 23:30:18.069171667 +0000 UTC m=+26.562688271 (delta=102.109117ms)
	I0213 23:30:18.191708   56231 fix.go:190] guest clock delta is within tolerance: 102.109117ms
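The guest-clock check above runs `date +%s.%N` in the guest (logged with the %!s/%!N artifact explained earlier), parses the seconds.nanoseconds value, and accepts the machine when the drift from the host clock is small. A simplified Go sketch of that comparison; the tolerance constant is illustrative, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output such as "1707867018.171280784".
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1707867018.171280784")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative; only small drift is tolerated
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}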
	I0213 23:30:18.191715   56231 start.go:83] releasing machines lock for "kindnet-397221", held for 26.548299464s
	I0213 23:30:18.191748   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:18.192034   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetIP
	I0213 23:30:18.195335   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.195799   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:18.195830   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.196057   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:18.196683   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:18.196858   56231 main.go:141] libmachine: (kindnet-397221) Calling .DriverName
	I0213 23:30:18.196947   56231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0213 23:30:18.196987   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:18.197059   56231 ssh_runner.go:195] Run: cat /version.json
	I0213 23:30:18.197088   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHHostname
	I0213 23:30:18.199926   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.200229   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.200337   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:18.200364   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.200573   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:18.200620   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:18.200668   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:18.200726   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:18.200821   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHPort
	I0213 23:30:18.200872   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:18.200968   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHKeyPath
	I0213 23:30:18.201043   56231 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa Username:docker}
	I0213 23:30:18.201126   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetSSHUsername
	I0213 23:30:18.201251   56231 sshutil.go:53] new ssh client: &{IP:192.168.61.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/kindnet-397221/id_rsa Username:docker}
	I0213 23:30:18.295313   56231 ssh_runner.go:195] Run: systemctl --version
	I0213 23:30:18.317363   56231 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0213 23:30:18.484488   56231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0213 23:30:18.490520   56231 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0213 23:30:18.490603   56231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0213 23:30:18.505936   56231 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0213 23:30:18.505970   56231 start.go:475] detecting cgroup driver to use...
	I0213 23:30:18.506044   56231 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0213 23:30:18.521005   56231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0213 23:30:18.534687   56231 docker.go:217] disabling cri-docker service (if available) ...
	I0213 23:30:18.534750   56231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0213 23:30:18.548930   56231 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0213 23:30:18.564665   56231 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0213 23:30:18.691095   56231 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0213 23:30:18.836769   56231 docker.go:233] disabling docker service ...
	I0213 23:30:18.836837   56231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0213 23:30:18.852825   56231 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0213 23:30:18.865024   56231 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0213 23:30:18.986469   56231 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0213 23:30:19.114963   56231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0213 23:30:19.130271   56231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0213 23:30:19.152187   56231 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0213 23:30:19.152244   56231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:30:19.162360   56231 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0213 23:30:19.162445   56231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:30:19.173416   56231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:30:19.184652   56231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0213 23:30:19.194621   56231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0213 23:30:19.205606   56231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0213 23:30:19.214921   56231 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0213 23:30:19.214989   56231 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0213 23:30:19.229161   56231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0213 23:30:19.240249   56231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0213 23:30:19.371840   56231 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0213 23:30:19.574114   56231 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0213 23:30:19.574192   56231 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0213 23:30:19.583527   56231 start.go:543] Will wait 60s for crictl version
	I0213 23:30:19.583590   56231 ssh_runner.go:195] Run: which crictl
	I0213 23:30:19.588373   56231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0213 23:30:19.632783   56231 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0213 23:30:19.632866   56231 ssh_runner.go:195] Run: crio --version
	I0213 23:30:19.689042   56231 ssh_runner.go:195] Run: crio --version
	I0213 23:30:19.747867   56231 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0213 23:30:17.232595   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:19.733701   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:19.749174   56231 main.go:141] libmachine: (kindnet-397221) Calling .GetIP
	I0213 23:30:19.752535   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:19.753037   56231 main.go:141] libmachine: (kindnet-397221) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:84:75", ip: ""} in network mk-kindnet-397221: {Iface:virbr2 ExpiryTime:2024-02-14 00:30:09 +0000 UTC Type:0 Mac:52:54:00:b2:84:75 Iaid: IPaddr:192.168.61.97 Prefix:24 Hostname:kindnet-397221 Clientid:01:52:54:00:b2:84:75}
	I0213 23:30:19.753072   56231 main.go:141] libmachine: (kindnet-397221) DBG | domain kindnet-397221 has defined IP address 192.168.61.97 and MAC address 52:54:00:b2:84:75 in network mk-kindnet-397221
	I0213 23:30:19.753309   56231 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0213 23:30:19.758238   56231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:30:19.773833   56231 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0213 23:30:19.773933   56231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:30:19.812555   56231 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0213 23:30:19.812635   56231 ssh_runner.go:195] Run: which lz4
	I0213 23:30:19.816764   56231 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0213 23:30:19.821100   56231 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0213 23:30:19.821139   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0213 23:30:20.112762   56875 main.go:141] libmachine: (calico-397221) Waiting to get IP...
	I0213 23:30:20.113824   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:20.114471   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:20.114498   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:20.114423   56949 retry.go:31] will retry after 255.906629ms: waiting for machine to come up
	I0213 23:30:20.372063   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:20.372775   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:20.372803   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:20.372738   56949 retry.go:31] will retry after 247.809563ms: waiting for machine to come up
	I0213 23:30:20.622443   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:20.623146   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:20.623173   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:20.623057   56949 retry.go:31] will retry after 433.632082ms: waiting for machine to come up
	I0213 23:30:21.058694   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:21.059190   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:21.059217   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:21.059159   56949 retry.go:31] will retry after 571.808912ms: waiting for machine to come up
	I0213 23:30:21.633036   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:21.633559   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:21.633586   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:21.633529   56949 retry.go:31] will retry after 559.354878ms: waiting for machine to come up
	I0213 23:30:22.194338   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:22.194867   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:22.194913   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:22.194820   56949 retry.go:31] will retry after 887.651399ms: waiting for machine to come up
	I0213 23:30:23.083923   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:23.084378   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:23.084401   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:23.084305   56949 retry.go:31] will retry after 943.358819ms: waiting for machine to come up
	I0213 23:30:24.028865   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:24.029446   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:24.029478   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:24.029391   56949 retry.go:31] will retry after 979.623424ms: waiting for machine to come up
	I0213 23:30:22.235028   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:24.480764   55355 pod_ready.go:102] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"False"
	I0213 23:30:21.812295   56231 crio.go:444] Took 1.995574 seconds to copy over tarball
	I0213 23:30:21.812375   56231 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0213 23:30:25.282792   56231 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.470385268s)
	I0213 23:30:25.282821   56231 crio.go:451] Took 3.470497 seconds to extract the tarball
	I0213 23:30:25.282833   56231 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0213 23:30:25.325942   56231 ssh_runner.go:195] Run: sudo crictl images --output json
	I0213 23:30:25.405256   56231 crio.go:496] all images are preloaded for cri-o runtime.
	I0213 23:30:25.405289   56231 cache_images.go:84] Images are preloaded, skipping loading
	I0213 23:30:25.405367   56231 ssh_runner.go:195] Run: crio config
	I0213 23:30:25.473551   56231 cni.go:84] Creating CNI manager for "kindnet"
	I0213 23:30:25.473602   56231 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0213 23:30:25.473631   56231 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.97 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-397221 NodeName:kindnet-397221 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0213 23:30:25.473808   56231 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-397221"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0213 23:30:25.473911   56231 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-397221 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
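The kubeadm config above is emitted as one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check that every document in such a file parses is sketched below, using the third-party gopkg.in/yaml.v3 package (an assumption for illustration; the test harness does not do this itself):

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical copy of the generated config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				panic(err) // a syntax error in the generated config would surface here
			}
			fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
		}
	}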
	I0213 23:30:25.473979   56231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0213 23:30:25.483801   56231 binaries.go:44] Found k8s binaries, skipping transfer
	I0213 23:30:25.483897   56231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0213 23:30:25.493810   56231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0213 23:30:25.512903   56231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0213 23:30:25.534577   56231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0213 23:30:25.553732   56231 ssh_runner.go:195] Run: grep 192.168.61.97	control-plane.minikube.internal$ /etc/hosts
	I0213 23:30:25.558216   56231 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0213 23:30:25.575360   56231 certs.go:56] Setting up /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221 for IP: 192.168.61.97
	I0213 23:30:25.575406   56231 certs.go:190] acquiring lock for shared ca certs: {Name:mk72e2f18645f03d2d562b727b46f3d16e754ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:25.575605   56231 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key
	I0213 23:30:25.575662   56231 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key
	I0213 23:30:25.575720   56231 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.key
	I0213 23:30:25.575750   56231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.crt with IP's: []
	I0213 23:30:25.709177   56231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.crt ...
	I0213 23:30:25.709219   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.crt: {Name:mk570d5139ee4dbf081ea31e87e25a1883eb8868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:25.709449   56231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.key ...
	I0213 23:30:25.709471   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/client.key: {Name:mk76de9aad84b718b3d9289030c9a82c2ce583a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:25.709599   56231 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key.df67b971
	I0213 23:30:25.709617   56231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt.df67b971 with IP's: [192.168.61.97 10.96.0.1 127.0.0.1 10.0.0.1]
	I0213 23:30:25.844349   56231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt.df67b971 ...
	I0213 23:30:25.844382   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt.df67b971: {Name:mkea09dbd940e3a6707952db581519f1fbe7dc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:25.844584   56231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key.df67b971 ...
	I0213 23:30:25.844605   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key.df67b971: {Name:mk9ca5dcb8416249ee384fdb0bf885a5db8b22d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:25.844708   56231 certs.go:337] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt.df67b971 -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt
	I0213 23:30:25.844821   56231 certs.go:341] copying /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key.df67b971 -> /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key
	I0213 23:30:25.844901   56231 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.key
	I0213 23:30:25.844919   56231 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.crt with IP's: []
	I0213 23:30:26.061577   56231 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.crt ...
	I0213 23:30:26.061608   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.crt: {Name:mkb817b5fd26758155df597b1866393833708100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:26.061806   56231 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.key ...
	I0213 23:30:26.061827   56231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.key: {Name:mk5a433aa7507d577ca83d8ca6f4e8d3d52b7bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0213 23:30:26.062059   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem (1338 bytes)
	W0213 23:30:26.062099   56231 certs.go:433] ignoring /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200_empty.pem, impossibly tiny 0 bytes
	I0213 23:30:26.062115   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca-key.pem (1675 bytes)
	I0213 23:30:26.062150   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/ca.pem (1078 bytes)
	I0213 23:30:26.062185   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/cert.pem (1123 bytes)
	I0213 23:30:26.062223   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/certs/home/jenkins/minikube-integration/18171-8990/.minikube/certs/key.pem (1675 bytes)
	I0213 23:30:26.062281   56231 certs.go:437] found cert: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem (1708 bytes)
	I0213 23:30:26.062887   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0213 23:30:26.089984   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0213 23:30:26.116227   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0213 23:30:26.142387   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/kindnet-397221/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0213 23:30:26.169766   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0213 23:30:26.200290   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0213 23:30:26.229813   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0213 23:30:26.257144   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0213 23:30:26.286304   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/ssl/certs/162002.pem --> /usr/share/ca-certificates/162002.pem (1708 bytes)
	I0213 23:30:26.317205   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0213 23:30:26.344425   56231 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18171-8990/.minikube/certs/16200.pem --> /usr/share/ca-certificates/16200.pem (1338 bytes)
	I0213 23:30:26.372922   56231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
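	Among the certificates copied above, apiserver.crt was generated with SANs for 192.168.61.97, 10.96.0.1, 127.0.0.1 and 10.0.0.1 (see the crypto.go line earlier). A quick on-node check that the cert really carries those SANs, sketched with openssl against the same paths:
	
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'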
	I0213 23:30:26.392050   56231 ssh_runner.go:195] Run: openssl version
	I0213 23:30:26.399917   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0213 23:30:26.413564   56231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:30:26.418924   56231 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 13 21:57 /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:30:26.418982   56231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0213 23:30:26.425161   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0213 23:30:26.436441   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16200.pem && ln -fs /usr/share/ca-certificates/16200.pem /etc/ssl/certs/16200.pem"
	I0213 23:30:26.447911   56231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16200.pem
	I0213 23:30:26.453242   56231 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 13 22:05 /usr/share/ca-certificates/16200.pem
	I0213 23:30:26.453303   56231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16200.pem
	I0213 23:30:26.460025   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16200.pem /etc/ssl/certs/51391683.0"
	I0213 23:30:26.471325   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/162002.pem && ln -fs /usr/share/ca-certificates/162002.pem /etc/ssl/certs/162002.pem"
	I0213 23:30:26.482131   56231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/162002.pem
	I0213 23:30:26.487597   56231 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 13 22:05 /usr/share/ca-certificates/162002.pem
	I0213 23:30:26.487678   56231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/162002.pem
	I0213 23:30:26.494108   56231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/162002.pem /etc/ssl/certs/3ec20f2e.0"
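	The test -L / ln -fs pairs above implement OpenSSL's hashed-directory convention: every trusted CA under /etc/ssl/certs gets a symlink named <subject-hash>.0, where the hash is exactly what openssl x509 -hash prints. Using the values from this log:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # links back to minikubeCA.pem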
	I0213 23:30:26.505386   56231 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0213 23:30:26.510169   56231 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0213 23:30:26.510225   56231 kubeadm.go:404] StartCluster: {Name:kindnet-397221 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:kindnet-397221 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.97 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 23:30:26.510293   56231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0213 23:30:26.510339   56231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0213 23:30:26.561393   56231 cri.go:89] found id: ""
	I0213 23:30:26.561455   56231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0213 23:30:26.572002   56231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0213 23:30:26.582658   56231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0213 23:30:26.592331   56231 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0213 23:30:26.592378   56231 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0213 23:30:26.650466   56231 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0213 23:30:26.650560   56231 kubeadm.go:322] [preflight] Running pre-flight checks
	I0213 23:30:26.833574   56231 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0213 23:30:26.833755   56231 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0213 23:30:26.833957   56231 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0213 23:30:27.127644   56231 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0213 23:30:26.731597   55355 pod_ready.go:92] pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:26.731625   55355 pod_ready.go:81] duration metric: took 38.008022807s waiting for pod "coredns-5dd5756b68-gstk7" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.731639   55355 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-p5th8" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.734880   55355 pod_ready.go:97] error getting pod "coredns-5dd5756b68-p5th8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-p5th8" not found
	I0213 23:30:26.734910   55355 pod_ready.go:81] duration metric: took 3.25754ms waiting for pod "coredns-5dd5756b68-p5th8" in "kube-system" namespace to be "Ready" ...
	E0213 23:30:26.734924   55355 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-p5th8" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-p5th8" not found
	I0213 23:30:26.734933   55355 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.742483   55355 pod_ready.go:92] pod "etcd-auto-397221" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:26.742513   55355 pod_ready.go:81] duration metric: took 7.569976ms waiting for pod "etcd-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.742527   55355 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.749774   55355 pod_ready.go:92] pod "kube-apiserver-auto-397221" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:26.749807   55355 pod_ready.go:81] duration metric: took 7.269789ms waiting for pod "kube-apiserver-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.749821   55355 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.757416   55355 pod_ready.go:92] pod "kube-controller-manager-auto-397221" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:26.757441   55355 pod_ready.go:81] duration metric: took 7.61226ms waiting for pod "kube-controller-manager-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.757452   55355 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tlhkl" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.929053   55355 pod_ready.go:92] pod "kube-proxy-tlhkl" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:26.929077   55355 pod_ready.go:81] duration metric: took 171.61997ms waiting for pod "kube-proxy-tlhkl" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:26.929088   55355 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:27.328578   55355 pod_ready.go:92] pod "kube-scheduler-auto-397221" in "kube-system" namespace has status "Ready":"True"
	I0213 23:30:27.328603   55355 pod_ready.go:81] duration metric: took 399.5091ms waiting for pod "kube-scheduler-auto-397221" in "kube-system" namespace to be "Ready" ...
	I0213 23:30:27.328616   55355 pod_ready.go:38] duration metric: took 38.634334654s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0213 23:30:27.328633   55355 api_server.go:52] waiting for apiserver process to appear ...
	I0213 23:30:27.328699   55355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 23:30:27.345293   55355 api_server.go:72] duration metric: took 40.642282533s to wait for apiserver process to appear ...
	I0213 23:30:27.345322   55355 api_server.go:88] waiting for apiserver healthz status ...
	I0213 23:30:27.345347   55355 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I0213 23:30:27.351471   55355 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I0213 23:30:27.353084   55355 api_server.go:141] control plane version: v1.28.4
	I0213 23:30:27.353111   55355 api_server.go:131] duration metric: took 7.781502ms to wait for apiserver health ...
	I0213 23:30:27.353122   55355 system_pods.go:43] waiting for kube-system pods to appear ...
	I0213 23:30:27.533455   55355 system_pods.go:59] 7 kube-system pods found
	I0213 23:30:27.533490   55355 system_pods.go:61] "coredns-5dd5756b68-gstk7" [f9cf7d89-7516-4fa0-afd1-cc7bd310254d] Running
	I0213 23:30:27.533497   55355 system_pods.go:61] "etcd-auto-397221" [966ee21b-8ceb-4a67-a812-2eb739db5b5c] Running
	I0213 23:30:27.533503   55355 system_pods.go:61] "kube-apiserver-auto-397221" [e81a3b9a-f00a-4fe1-8e77-2fcdce4b2c36] Running
	I0213 23:30:27.533509   55355 system_pods.go:61] "kube-controller-manager-auto-397221" [43fc86a7-7933-4f57-8cd0-a44f603583b3] Running
	I0213 23:30:27.533514   55355 system_pods.go:61] "kube-proxy-tlhkl" [5cc9148d-6b22-4f04-ad3a-98ca9750689c] Running
	I0213 23:30:27.533521   55355 system_pods.go:61] "kube-scheduler-auto-397221" [2796e7c1-05bf-465b-8a70-90920c85bbf4] Running
	I0213 23:30:27.533528   55355 system_pods.go:61] "storage-provisioner" [4354478a-8629-4c4f-996b-bc1e8c891dab] Running
	I0213 23:30:27.533535   55355 system_pods.go:74] duration metric: took 180.407153ms to wait for pod list to return data ...
	I0213 23:30:27.533546   55355 default_sa.go:34] waiting for default service account to be created ...
	I0213 23:30:27.728681   55355 default_sa.go:45] found service account: "default"
	I0213 23:30:27.728710   55355 default_sa.go:55] duration metric: took 195.156933ms for default service account to be created ...
	I0213 23:30:27.728721   55355 system_pods.go:116] waiting for k8s-apps to be running ...
	I0213 23:30:27.932937   55355 system_pods.go:86] 7 kube-system pods found
	I0213 23:30:27.932974   55355 system_pods.go:89] "coredns-5dd5756b68-gstk7" [f9cf7d89-7516-4fa0-afd1-cc7bd310254d] Running
	I0213 23:30:27.932982   55355 system_pods.go:89] "etcd-auto-397221" [966ee21b-8ceb-4a67-a812-2eb739db5b5c] Running
	I0213 23:30:27.932989   55355 system_pods.go:89] "kube-apiserver-auto-397221" [e81a3b9a-f00a-4fe1-8e77-2fcdce4b2c36] Running
	I0213 23:30:27.932996   55355 system_pods.go:89] "kube-controller-manager-auto-397221" [43fc86a7-7933-4f57-8cd0-a44f603583b3] Running
	I0213 23:30:27.933002   55355 system_pods.go:89] "kube-proxy-tlhkl" [5cc9148d-6b22-4f04-ad3a-98ca9750689c] Running
	I0213 23:30:27.933008   55355 system_pods.go:89] "kube-scheduler-auto-397221" [2796e7c1-05bf-465b-8a70-90920c85bbf4] Running
	I0213 23:30:27.933014   55355 system_pods.go:89] "storage-provisioner" [4354478a-8629-4c4f-996b-bc1e8c891dab] Running
	I0213 23:30:27.933023   55355 system_pods.go:126] duration metric: took 204.296503ms to wait for k8s-apps to be running ...
	I0213 23:30:27.933038   55355 system_svc.go:44] waiting for kubelet service to be running ....
	I0213 23:30:27.933092   55355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 23:30:27.947659   55355 system_svc.go:56] duration metric: took 14.606152ms WaitForService to wait for kubelet.
	I0213 23:30:27.947694   55355 kubeadm.go:581] duration metric: took 41.244691053s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0213 23:30:27.947725   55355 node_conditions.go:102] verifying NodePressure condition ...
	I0213 23:30:28.128225   55355 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0213 23:30:28.128263   55355 node_conditions.go:123] node cpu capacity is 2
	I0213 23:30:28.128278   55355 node_conditions.go:105] duration metric: took 180.548314ms to run NodePressure ...
	I0213 23:30:28.128293   55355 start.go:228] waiting for startup goroutines ...
	I0213 23:30:28.128302   55355 start.go:233] waiting for cluster config update ...
	I0213 23:30:28.128315   55355 start.go:242] writing updated cluster config ...
	I0213 23:30:28.128649   55355 ssh_runner.go:195] Run: rm -f paused
	I0213 23:30:28.178818   55355 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0213 23:30:28.181919   55355 out.go:177] * Done! kubectl is now configured to use "auto-397221" cluster and "default" namespace by default
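	The pod_ready waits for the auto-397221 profile above mirror what could be checked by hand with kubectl once the kubeconfig is written; an illustrative equivalent using the context name from the log:
	
	  kubectl --context auto-397221 get pods -n kube-system
	  kubectl --context auto-397221 wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=15m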
	I0213 23:30:25.010259   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:25.010664   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:25.010700   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:25.010607   56949 retry.go:31] will retry after 1.23030789s: waiting for machine to come up
	I0213 23:30:26.243039   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:26.243573   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:26.243606   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:26.243512   56949 retry.go:31] will retry after 1.460755121s: waiting for machine to come up
	I0213 23:30:27.706279   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:27.706793   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:27.706821   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:27.706736   56949 retry.go:31] will retry after 1.833156762s: waiting for machine to come up
	I0213 23:30:27.129556   56231 out.go:204]   - Generating certificates and keys ...
	I0213 23:30:27.129670   56231 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0213 23:30:27.129764   56231 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0213 23:30:27.486586   56231 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0213 23:30:27.560823   56231 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0213 23:30:27.650845   56231 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0213 23:30:27.930288   56231 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0213 23:30:28.122001   56231 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0213 23:30:28.122237   56231 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-397221 localhost] and IPs [192.168.61.97 127.0.0.1 ::1]
	I0213 23:30:28.393172   56231 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0213 23:30:28.393529   56231 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-397221 localhost] and IPs [192.168.61.97 127.0.0.1 ::1]
	I0213 23:30:28.528785   56231 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0213 23:30:28.594294   56231 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0213 23:30:28.876857   56231 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0213 23:30:28.877119   56231 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0213 23:30:29.112990   56231 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0213 23:30:29.415000   56231 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0213 23:30:29.513283   56231 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0213 23:30:29.854675   56231 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0213 23:30:29.857005   56231 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0213 23:30:29.862414   56231 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0213 23:30:29.863890   56231 out.go:204]   - Booting up control plane ...
	I0213 23:30:29.863994   56231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0213 23:30:29.864143   56231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0213 23:30:29.864720   56231 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0213 23:30:29.882748   56231 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0213 23:30:29.883609   56231 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0213 23:30:29.883683   56231 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0213 23:30:30.035605   56231 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0213 23:30:29.541400   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:29.541934   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:29.541963   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:29.541884   56949 retry.go:31] will retry after 2.788261739s: waiting for machine to come up
	I0213 23:30:32.332364   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:32.332731   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:32.332761   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:32.332686   56949 retry.go:31] will retry after 3.815524076s: waiting for machine to come up
	I0213 23:30:36.150851   56875 main.go:141] libmachine: (calico-397221) DBG | domain calico-397221 has defined MAC address 52:54:00:3e:1f:e4 in network mk-calico-397221
	I0213 23:30:36.151282   56875 main.go:141] libmachine: (calico-397221) DBG | unable to find current IP address of domain calico-397221 in network mk-calico-397221
	I0213 23:30:36.151314   56875 main.go:141] libmachine: (calico-397221) DBG | I0213 23:30:36.151230   56949 retry.go:31] will retry after 4.542758816s: waiting for machine to come up
	I0213 23:30:39.037616   56231 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003997 seconds
	I0213 23:30:39.037719   56231 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0213 23:30:39.059205   56231 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0213 23:30:39.589798   56231 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0213 23:30:39.590105   56231 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-397221 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0213 23:30:40.103587   56231 kubeadm.go:322] [bootstrap-token] Using token: 1khnxo.6nnjtcvjjt48bj8n
	I0213 23:30:40.104980   56231 out.go:204]   - Configuring RBAC rules ...
	I0213 23:30:40.105119   56231 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0213 23:30:40.115314   56231 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0213 23:30:40.123635   56231 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0213 23:30:40.128788   56231 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0213 23:30:40.135093   56231 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0213 23:30:40.138965   56231 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0213 23:30:40.157384   56231 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0213 23:30:40.417842   56231 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0213 23:30:40.530193   56231 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0213 23:30:40.530215   56231 kubeadm.go:322] 
	I0213 23:30:40.530291   56231 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0213 23:30:40.530304   56231 kubeadm.go:322] 
	I0213 23:30:40.530414   56231 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0213 23:30:40.530425   56231 kubeadm.go:322] 
	I0213 23:30:40.530459   56231 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0213 23:30:40.530540   56231 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0213 23:30:40.530643   56231 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0213 23:30:40.530657   56231 kubeadm.go:322] 
	I0213 23:30:40.530725   56231 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0213 23:30:40.530735   56231 kubeadm.go:322] 
	I0213 23:30:40.530820   56231 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0213 23:30:40.530830   56231 kubeadm.go:322] 
	I0213 23:30:40.530899   56231 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0213 23:30:40.531020   56231 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0213 23:30:40.531131   56231 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0213 23:30:40.531161   56231 kubeadm.go:322] 
	I0213 23:30:40.531281   56231 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0213 23:30:40.531379   56231 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0213 23:30:40.531396   56231 kubeadm.go:322] 
	I0213 23:30:40.531513   56231 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1khnxo.6nnjtcvjjt48bj8n \
	I0213 23:30:40.531651   56231 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 \
	I0213 23:30:40.531684   56231 kubeadm.go:322] 	--control-plane 
	I0213 23:30:40.531693   56231 kubeadm.go:322] 
	I0213 23:30:40.531822   56231 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0213 23:30:40.531835   56231 kubeadm.go:322] 
	I0213 23:30:40.531981   56231 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1khnxo.6nnjtcvjjt48bj8n \
	I0213 23:30:40.532130   56231 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ce5c50754a3e3593e8fc93a1073f57faca84c5b256f9049940024df629331f28 
	I0213 23:30:40.532496   56231 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0213 23:30:40.532516   56231 cni.go:84] Creating CNI manager for "kindnet"
	I0213 23:30:40.534044   56231 out.go:177] * Configuring CNI (Container Networking Interface) ...
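	The join command printed by kubeadm init above carries a bootstrap token whose TTL is the 24h0m0s value at the top of the generated config; minikube does not use it here, and once it expires a fresh one can be produced on the control-plane node, e.g.:
	
	  sudo kubeadm token create --print-join-command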
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-02-13 23:08:41 UTC, ends at Tue 2024-02-13 23:30:41 UTC. --
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.586194054Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0ba6e2d21af41cf940e8cc55656b2c2e0e87578b90d8ccad36dff47b82dc4b6f,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-rkg49,Uid:d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866053115383879,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-rkg49,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:14:12.779072630Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bba2cb47-d726-4852-a704-b315
daa0f646,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866053025754953,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-02-13T23:14:12.685272692Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-zfscd,Uid:98a75f73-94a2-4566-9b70-74d5ed759628,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866051499185816,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:14:10.259896941Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&PodSandboxMetadata{Name:kube-proxy-kvz2b,Uid:54f06cac-d864-49
cc-a00f-803d6f6333a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866050090674408,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-02-13T23:14:09.154585755Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-083863,Uid:0bb39490c27d374b8ab8fefcb4d8c22f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866028656217470,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0bb39490c27d374b8ab8fefcb4d8c22f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bb39490c27d374b8ab8fefcb4d8c22f,kubernetes.io/config.seen: 2024-02-13T23:13:48.116540668Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-083863,Uid:94c2ed21fc112a76f4643dc4b7bf8de1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866028642629831,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.3:8444,kubernetes.io/config.hash: 94c2ed21fc112a76f4643dc4b7bf8de1,kubernetes.io/config.seen: 2024-02-13T
23:13:48.116531661Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-083863,Uid:3b2f08a7709ca5f23cadd717472e3823,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866028631066962,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3b2f08a7709ca5f23cadd717472e3823,kubernetes.io/config.seen: 2024-02-13T23:13:48.116539482Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-083863,Uid:f2fd4c93d6f077977d
66ef8619e49fac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1707866028587221197,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977d66ef8619e49fac,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.3:2379,kubernetes.io/config.hash: f2fd4c93d6f077977d66ef8619e49fac,kubernetes.io/config.seen: 2024-02-13T23:13:48.116527570Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=9d593a27-415f-4280-9247-0a171285398c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.587017779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9adaa02f-138d-4f3a-b8e4-d5bc7135c18a name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.587070290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9adaa02f-138d-4f3a-b8e4-d5bc7135c18a name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.587316601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9adaa02f-138d-4f3a-b8e4-d5bc7135c18a name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.618222086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a22135d5-4bb5-4115-ba18-1a727f044d87 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.618293540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a22135d5-4bb5-4115-ba18-1a727f044d87 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.620337633Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=398adea9-e1d4-43c9-8205-47334ae05149 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.621292144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707867041621271808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=398adea9-e1d4-43c9-8205-47334ae05149 name=/runtime.v1.ImageService/ImageFsInfo
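	These CRI-O debug entries are the server side of the CRI RPCs issued by kubelet and crictl; the same information can be pulled directly on the default-k8s-diff-port-083863 node with crictl, e.g. (mirroring the label filter the test harness itself uses earlier in this log):
	
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	  sudo crictl pods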
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.622253986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=846d9972-862a-4bf5-a7b7-966ba5358c39 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.622301824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=846d9972-862a-4bf5-a7b7-966ba5358c39 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.622492561Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=846d9972-862a-4bf5-a7b7-966ba5358c39 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.671457521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3a102105-b666-4df4-9fab-eb7bbd1546d6 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.671632160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3a102105-b666-4df4-9fab-eb7bbd1546d6 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.674824492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5b0bee31-62f5-4a97-aad2-a132c284b72f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.675197206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707867041675183483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5b0bee31-62f5-4a97-aad2-a132c284b72f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.676319204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aab943d8-ab4a-42ec-869b-7a0350a2337c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.676367954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aab943d8-ab4a-42ec-869b-7a0350a2337c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.676516045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aab943d8-ab4a-42ec-869b-7a0350a2337c name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.728143967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5431e1f1-8043-4059-b07d-6be71af31018 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.728214956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5431e1f1-8043-4059-b07d-6be71af31018 name=/runtime.v1.RuntimeService/Version
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.730253902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ecc49311-6e71-4944-97b0-bb16693eefcb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.731219580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1707867041731190920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ecc49311-6e71-4944-97b0-bb16693eefcb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.732215000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0cbdf8a5-9c59-4fa9-b7f3-0765fa753428 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.732290303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0cbdf8a5-9c59-4fa9-b7f3-0765fa753428 name=/runtime.v1.RuntimeService/ListContainers
	Feb 13 23:30:41 default-k8s-diff-port-083863 crio[717]: time="2024-02-13 23:30:41.732678100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17,PodSandboxId:b22d5433d1db1a4259d234b3b0e38168313e2d63081ad0482f4d00d242c95d53,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1707866054092911681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba2cb47-d726-4852-a704-b315daa0f646,},Annotations:map[string]string{io.kubernetes.container.hash: bfff1a92,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72,PodSandboxId:b4980c423eefc1f03a130b36109adf94dd639d79a1d533ce0017f0707b9890b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1707866053392181187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-zfscd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98a75f73-94a2-4566-9b70-74d5ed759628,},Annotations:map[string]string{io.kubernetes.container.hash: d694f261,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca,PodSandboxId:2cfaff2ab39662d5201d09491d25725e534b4623740649e9b0e808d95940c1b8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1707866051191263563,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kvz2b,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 54f06cac-d864-49cc-a00f-803d6f6333a3,},Annotations:map[string]string{io.kubernetes.container.hash: 79b10a60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65,PodSandboxId:1301d0fa68e32e69d0405d7f6cc8e67c10466d51bd0ab4d34aeba64536ba2126,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1707866029748567103,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fd4c93d6f077977
d66ef8619e49fac,},Annotations:map[string]string{io.kubernetes.container.hash: 1db27cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647,PodSandboxId:16023d4c404e4d48d400ee9e5698d9a149403b097a08667c29c01178b9730305,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1707866029711449123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3b2f08a7709ca5f23cadd717472e3823,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae,PodSandboxId:972046b89c6b49ac0245fd86c3f754351c04d2c179a7f105ad422cb88e6c2440,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1707866029494404131,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 0bb39490c27d374b8ab8fefcb4d8c22f,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6,PodSandboxId:dcde9a0e0d2f5c2e38d2bb6fdf8dbcaa580f3b198b448b3ad4716f33280a87de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1707866029300621093,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-083863,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 94c2ed21fc112a76f4643dc4b7bf8de1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d644768,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0cbdf8a5-9c59-4fa9-b7f3-0765fa753428 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b77bb1054c124       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b22d5433d1db1       storage-provisioner
	54c4e3487b37a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   b4980c423eefc       coredns-5dd5756b68-zfscd
	cf87943bc8d36       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   2cfaff2ab3966       kube-proxy-kvz2b
	d21b5c6916454       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   1301d0fa68e32       etcd-default-k8s-diff-port-083863
	090e6a31f6e25       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   16023d4c404e4       kube-controller-manager-default-k8s-diff-port-083863
	5b9dcc8f5592c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   972046b89c6b4       kube-scheduler-default-k8s-diff-port-083863
	fab70becf45b1       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   dcde9a0e0d2f5       kube-apiserver-default-k8s-diff-port-083863
	
	
	==> coredns [54c4e3487b37ac9b0f09494984eeb2b1b2ae51e07d8351d2c330d84df5a2ad72] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36814 - 44383 "HINFO IN 6798519642253464597.6305308384375373136. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.10547697s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-083863
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-083863
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=613caefe13c19c397229c748a081b93da0bf2e2e
	                    minikube.k8s.io/name=default-k8s-diff-port-083863
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_13T23_13_57_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 13 Feb 2024 23:13:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-083863
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 13 Feb 2024 23:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 13 Feb 2024 23:29:36 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 13 Feb 2024 23:29:36 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 13 Feb 2024 23:29:36 +0000   Tue, 13 Feb 2024 23:13:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 13 Feb 2024 23:29:36 +0000   Tue, 13 Feb 2024 23:14:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    default-k8s-diff-port-083863
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c35be514b0749f1bb646e6d331bddbd
	  System UUID:                2c35be51-4b07-49f1-bb64-6e6d331bddbd
	  Boot ID:                    6517d7fc-ffdb-4ab9-a6ee-ce0bf8e78a15
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-zfscd                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-083863                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-083863             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-083863    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-kvz2b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-083863             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-rkg49                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-083863 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-083863 event: Registered Node default-k8s-diff-port-083863 in Controller
	
	
	==> dmesg <==
	[Feb13 23:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069635] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.579001] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.542564] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.145047] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.504965] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.355292] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.142249] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.257558] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.132249] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.303438] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[Feb13 23:09] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[ +19.082845] kauditd_printk_skb: 29 callbacks suppressed
	[Feb13 23:13] systemd-fstab-generator[3487]: Ignoring "noauto" for root device
	[  +9.785148] systemd-fstab-generator[3810]: Ignoring "noauto" for root device
	[Feb13 23:14] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [d21b5c69164544d66d8224a8afe3ea2f988f95e9005a6721c85e2a0554fc0a65] <==
	{"level":"info","ts":"2024-02-13T23:13:52.296053Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d030e9334923ef1","local-member-id":"ac0ce77fb984259c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.296151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.296175Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-13T23:13:52.29621Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:52.29622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-13T23:13:52.296228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-13T23:13:52.297235Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-13T23:13:52.297881Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.3:2379"}
	{"level":"info","ts":"2024-02-13T23:23:52.33263Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-02-13T23:23:52.335993Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":717,"took":"2.535604ms","hash":572379158}
	{"level":"info","ts":"2024-02-13T23:23:52.336093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":572379158,"revision":717,"compact-revision":-1}
	{"level":"info","ts":"2024-02-13T23:28:35.756505Z","caller":"traceutil/trace.go:171","msg":"trace[886005640] transaction","detail":"{read_only:false; response_revision:1189; number_of_response:1; }","duration":"164.625652ms","start":"2024-02-13T23:28:35.591813Z","end":"2024-02-13T23:28:35.756439Z","steps":["trace[886005640] 'process raft request'  (duration: 164.180925ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:28:36.071044Z","caller":"traceutil/trace.go:171","msg":"trace[1145470244] transaction","detail":"{read_only:false; response_revision:1190; number_of_response:1; }","duration":"126.579632ms","start":"2024-02-13T23:28:35.944422Z","end":"2024-02-13T23:28:36.071002Z","steps":["trace[1145470244] 'process raft request'  (duration: 58.353152ms)","trace[1145470244] 'compare'  (duration: 68.085543ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-13T23:28:52.341804Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":960}
	{"level":"info","ts":"2024-02-13T23:28:52.343672Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":960,"took":"1.540148ms","hash":532869672}
	{"level":"info","ts":"2024-02-13T23:28:52.343859Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":532869672,"revision":960,"compact-revision":717}
	{"level":"info","ts":"2024-02-13T23:29:18.174546Z","caller":"traceutil/trace.go:171","msg":"trace[675270375] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"122.205632ms","start":"2024-02-13T23:29:18.05231Z","end":"2024-02-13T23:29:18.174516Z","steps":["trace[675270375] 'process raft request'  (duration: 122.075432ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:30:22.828413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.769673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T23:30:22.828914Z","caller":"traceutil/trace.go:171","msg":"trace[1287774098] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1278; }","duration":"112.412247ms","start":"2024-02-13T23:30:22.716446Z","end":"2024-02-13T23:30:22.828858Z","steps":["trace[1287774098] 'range keys from in-memory index tree'  (duration: 111.619469ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:30:24.937971Z","caller":"traceutil/trace.go:171","msg":"trace[700890908] transaction","detail":"{read_only:false; response_revision:1279; number_of_response:1; }","duration":"224.65345ms","start":"2024-02-13T23:30:24.71328Z","end":"2024-02-13T23:30:24.937933Z","steps":["trace[700890908] 'process raft request'  (duration: 224.350991ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-13T23:30:24.93858Z","caller":"traceutil/trace.go:171","msg":"trace[437892437] linearizableReadLoop","detail":"{readStateIndex:1495; appliedIndex:1495; }","duration":"220.539091ms","start":"2024-02-13T23:30:24.71801Z","end":"2024-02-13T23:30:24.938549Z","steps":["trace[437892437] 'read index received'  (duration: 220.5325ms)","trace[437892437] 'applied index is now lower than readState.Index'  (duration: 3.385µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-13T23:30:24.938968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.932931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-13T23:30:24.939032Z","caller":"traceutil/trace.go:171","msg":"trace[320667819] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1279; }","duration":"221.026214ms","start":"2024-02-13T23:30:24.717974Z","end":"2024-02-13T23:30:24.939Z","steps":["trace[320667819] 'agreement among raft nodes before linearized reading'  (duration: 220.687389ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-13T23:30:25.160677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.265254ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2710196814497502378 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3bhq3q26v7jhftsbddkrqir7fu\" mod_revision:1271 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3bhq3q26v7jhftsbddkrqir7fu\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3bhq3q26v7jhftsbddkrqir7fu\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-02-13T23:30:25.161381Z","caller":"traceutil/trace.go:171","msg":"trace[923480039] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"258.07876ms","start":"2024-02-13T23:30:24.903278Z","end":"2024-02-13T23:30:25.161357Z","steps":["trace[923480039] 'process raft request'  (duration: 118.671365ms)","trace[923480039] 'compare'  (duration: 137.103675ms)"],"step_count":2}
	
	
	==> kernel <==
	 23:30:42 up 22 min,  0 users,  load average: 0.26, 0.19, 0.18
	Linux default-k8s-diff-port-083863 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fab70becf45b123294d99d53dbfc86e6a0ad4e2068f18cdf6cdcf1074f0c26d6] <==
	I0213 23:26:55.096269       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0213 23:26:55.096366       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:26:55.098117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:27:53.961818       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0213 23:28:53.961616       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:28:54.099870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:54.100141       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:28:54.100652       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:28:55.100311       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:55.100451       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0213 23:28:55.100311       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:28:55.100596       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:28:55.100483       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:28:55.102663       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0213 23:29:53.961344       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0213 23:29:55.100879       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:29:55.100957       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0213 23:29:55.100968       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0213 23:29:55.103439       1 handler_proxy.go:93] no RequestInfo found in the context
	E0213 23:29:55.103519       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0213 23:29:55.103547       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [090e6a31f6e258675e6724f3562b661bec8fc683c4a3f9c19aaa0b573c0c7647] <==
	I0213 23:25:31.605969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="174.7µs"
	E0213 23:25:39.406174       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:25:39.950842       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:09.412330       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:09.960978       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:26:39.420323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:26:39.971401       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:09.428329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:09.985003       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:27:39.434885       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:27:39.997058       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:28:09.440549       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:28:10.012633       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:28:39.448227       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:28:40.024437       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:29:09.455853       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:29:10.038583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:29:39.462577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:29:40.049499       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0213 23:30:09.469949       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:30:10.058567       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0213 23:30:20.629625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="390.984µs"
	I0213 23:30:32.610975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="575.506µs"
	E0213 23:30:39.475931       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0213 23:30:40.068132       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [cf87943bc8d36d33044403cdace8318ebdbd42df8cf8e4f43fcd66ad6f6415ca] <==
	I0213 23:14:13.222665       1 server_others.go:69] "Using iptables proxy"
	I0213 23:14:13.273844       1 node.go:141] Successfully retrieved node IP: 192.168.39.3
	I0213 23:14:13.669997       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0213 23:14:13.670069       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0213 23:14:13.685667       1 server_others.go:152] "Using iptables Proxier"
	I0213 23:14:13.685911       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0213 23:14:13.686146       1 server.go:846] "Version info" version="v1.28.4"
	I0213 23:14:13.690053       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0213 23:14:13.740210       1 config.go:188] "Starting service config controller"
	I0213 23:14:13.741095       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0213 23:14:13.741379       1 config.go:97] "Starting endpoint slice config controller"
	I0213 23:14:13.741422       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0213 23:14:13.746847       1 config.go:315] "Starting node config controller"
	I0213 23:14:13.746987       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0213 23:14:13.842162       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0213 23:14:13.842265       1 shared_informer.go:318] Caches are synced for service config
	I0213 23:14:13.848059       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5b9dcc8f5592c003c82310c45720cde3e57603185eb90d6d4132152e1c6990ae] <==
	W0213 23:13:54.119403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:13:54.121508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:13:54.121676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:54.121906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:55.080350       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0213 23:13:55.080466       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0213 23:13:55.136102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0213 23:13:55.136258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0213 23:13:55.204944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0213 23:13:55.205096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0213 23:13:55.260382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0213 23:13:55.260506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0213 23:13:55.308678       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0213 23:13:55.308881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0213 23:13:55.347963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:55.348090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:55.386911       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0213 23:13:55.387074       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0213 23:13:55.413371       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0213 23:13:55.413467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0213 23:13:55.422835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0213 23:13:55.422934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0213 23:13:55.487683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0213 23:13:55.487890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0213 23:13:58.204921       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-02-13 23:08:41 UTC, ends at Tue 2024-02-13 23:30:42 UTC. --
	Feb 13 23:27:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:28:11 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:11.586810    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:28:26 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:26.586182    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:28:41 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:41.586151    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:28:52 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:52.586128    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:28:57 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:57.660004    3817 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:28:57 default-k8s-diff-port-083863 kubelet[3817]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:28:57 default-k8s-diff-port-083863 kubelet[3817]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:28:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:28:57 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:28:57.694463    3817 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Feb 13 23:29:04 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:04.585828    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:29:15 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:15.588681    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:29:28 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:28.586267    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:29:39 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:39.586884    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:29:54 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:54.587154    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:29:57 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:29:57.661622    3817 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 13 23:29:57 default-k8s-diff-port-083863 kubelet[3817]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 13 23:29:57 default-k8s-diff-port-083863 kubelet[3817]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 13 23:29:57 default-k8s-diff-port-083863 kubelet[3817]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 13 23:30:06 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:06.600039    3817 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:30:06 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:06.600092    3817 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Feb 13 23:30:06 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:06.600349    3817 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-85xnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rkg49_kube-system(d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Feb 13 23:30:06 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:06.600392    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:30:20 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:20.587980    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	Feb 13 23:30:32 default-k8s-diff-port-083863 kubelet[3817]: E0213 23:30:32.586988    3817 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rkg49" podUID="d96bdaf9-c0bd-47e7-a29a-bc623ceeb64b"
	
	
	==> storage-provisioner [b77bb1054c1245fcce8fe8ab24712437d65a30bee9c67099a286419f5b5b1f17] <==
	I0213 23:14:14.264840       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0213 23:14:14.278961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0213 23:14:14.279188       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0213 23:14:14.290391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0213 23:14:14.291469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484!
	I0213 23:14:14.293237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a94c010d-1957-412e-af00-3b0a657acaf6", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484 became leader
	I0213 23:14:14.392120       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-083863_3a599a68-9463-48a2-bf21-45b5bf078484!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rkg49
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49: exit status 1 (65.732876ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rkg49" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-083863 describe pod metrics-server-57f55c9bc5-rkg49: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (169.27s)
E0213 23:33:01.905220   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory

                                                
                                    

Test pass (247/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.77
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.15
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 4.79
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.49
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 130.41
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 155.81
38 TestAddons/parallel/Registry 15.95
40 TestAddons/parallel/InspektorGadget 12.34
41 TestAddons/parallel/MetricsServer 7.02
42 TestAddons/parallel/HelmTiller 20.97
44 TestAddons/parallel/CSI 70.96
45 TestAddons/parallel/Headlamp 18.24
47 TestAddons/parallel/LocalPath 59.6
48 TestAddons/parallel/NvidiaDevicePlugin 6.71
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
54 TestCertOptions 87.78
55 TestCertExpiration 308.79
57 TestForceSystemdFlag 80.98
58 TestForceSystemdEnv 50.44
60 TestKVMDriverInstallOrUpdate 1.48
64 TestErrorSpam/setup 47.31
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.79
67 TestErrorSpam/pause 1.61
68 TestErrorSpam/unpause 1.8
69 TestErrorSpam/stop 2.27
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 100.12
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 39.35
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
81 TestFunctional/serial/CacheCmd/cache/add_local 1.41
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 37.48
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.54
92 TestFunctional/serial/LogsFileCmd 1.61
93 TestFunctional/serial/InvalidService 4.41
95 TestFunctional/parallel/ConfigCmd 0.47
96 TestFunctional/parallel/DashboardCmd 20.91
97 TestFunctional/parallel/DryRun 0.33
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.03
103 TestFunctional/parallel/ServiceCmdConnect 8.31
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 45.05
107 TestFunctional/parallel/SSHCmd 0.5
108 TestFunctional/parallel/CpCmd 1.55
109 TestFunctional/parallel/MySQL 26.92
110 TestFunctional/parallel/FileSync 0.25
111 TestFunctional/parallel/CertSync 1.65
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
119 TestFunctional/parallel/License 0.22
120 TestFunctional/parallel/ServiceCmd/DeployApp 15.55
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
122 TestFunctional/parallel/ProfileCmd/profile_list 0.32
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.66
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
139 TestFunctional/parallel/ImageCommands/ImageBuild 3
140 TestFunctional/parallel/ImageCommands/Setup 0.97
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.78
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
144 TestFunctional/parallel/ServiceCmd/List 0.48
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
147 TestFunctional/parallel/ServiceCmd/Format 0.6
148 TestFunctional/parallel/ServiceCmd/URL 0.46
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
152 TestFunctional/parallel/MountCmd/any-port 23.93
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.52
154 TestFunctional/parallel/ImageCommands/ImageRemove 1.64
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.37
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.55
157 TestFunctional/parallel/MountCmd/specific-port 1.78
158 TestFunctional/parallel/MountCmd/VerifyCleanup 0.8
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestIngressAddonLegacy/StartLegacyK8sCluster 107.15
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.46
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
172 TestJSONOutput/start/Command 90.09
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.68
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.69
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.1
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.22
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 99.06
204 TestMountStart/serial/StartWithMountFirst 27.39
205 TestMountStart/serial/VerifyMountFirst 0.42
206 TestMountStart/serial/StartWithMountSecond 30.07
207 TestMountStart/serial/VerifyMountSecond 0.4
208 TestMountStart/serial/DeleteFirst 0.69
209 TestMountStart/serial/VerifyMountPostDelete 0.4
210 TestMountStart/serial/Stop 1.2
211 TestMountStart/serial/RestartStopped 23.38
212 TestMountStart/serial/VerifyMountPostStop 0.42
215 TestMultiNode/serial/FreshStart2Nodes 111.5
216 TestMultiNode/serial/DeployApp2Nodes 5.7
217 TestMultiNode/serial/PingHostFrom2Pods 0.98
218 TestMultiNode/serial/AddNode 43.14
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.97
222 TestMultiNode/serial/StopNode 3.03
223 TestMultiNode/serial/StartAfterStop 29.58
225 TestMultiNode/serial/DeleteNode 1.59
227 TestMultiNode/serial/RestartMultiNode 447.52
228 TestMultiNode/serial/ValidateNameConflict 50.15
235 TestScheduledStopUnix 120.37
239 TestRunningBinaryUpgrade 213.15
241 TestKubernetesUpgrade 259.54
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 101.5
246 TestNoKubernetes/serial/StartWithStopK8s 37.19
247 TestStoppedBinaryUpgrade/Setup 0.54
248 TestStoppedBinaryUpgrade/Upgrade 130.31
249 TestNoKubernetes/serial/Start 57.99
250 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
251 TestNoKubernetes/serial/ProfileList 11.84
252 TestNoKubernetes/serial/Stop 1.34
253 TestNoKubernetes/serial/StartNoArgs 51.43
254 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
255 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
257 TestPause/serial/Start 159.34
272 TestNetworkPlugins/group/false 6.01
277 TestStartStop/group/old-k8s-version/serial/FirstStart 152.33
279 TestStartStop/group/no-preload/serial/FirstStart 119.8
280 TestPause/serial/SecondStartNoReconfiguration 42.06
281 TestPause/serial/Pause 0.81
282 TestPause/serial/VerifyStatus 0.27
283 TestPause/serial/Unpause 0.68
284 TestPause/serial/PauseAgain 0.94
285 TestPause/serial/DeletePaused 1.07
286 TestPause/serial/VerifyDeletedResources 0.54
288 TestStartStop/group/embed-certs/serial/FirstStart 104.9
289 TestStartStop/group/old-k8s-version/serial/DeployApp 8.58
290 TestStartStop/group/no-preload/serial/DeployApp 9.39
291 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.35
295 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
297 TestStartStop/group/embed-certs/serial/DeployApp 9.31
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
305 TestStartStop/group/old-k8s-version/serial/SecondStart 434.48
306 TestStartStop/group/no-preload/serial/SecondStart 903.07
308 TestStartStop/group/embed-certs/serial/SecondStart 832.08
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 827.67
320 TestStartStop/group/newest-cni/serial/FirstStart 63.05
321 TestNetworkPlugins/group/auto/Start 107.93
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
324 TestStartStop/group/newest-cni/serial/Stop 3.12
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
326 TestStartStop/group/newest-cni/serial/SecondStart 55.05
327 TestNetworkPlugins/group/kindnet/Start 72.88
328 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
331 TestStartStop/group/newest-cni/serial/Pause 2.96
332 TestNetworkPlugins/group/calico/Start 112.49
333 TestNetworkPlugins/group/auto/KubeletFlags 0.26
334 TestNetworkPlugins/group/auto/NetCatPod 13.29
335 TestNetworkPlugins/group/auto/DNS 0.23
336 TestNetworkPlugins/group/auto/Localhost 0.17
337 TestNetworkPlugins/group/auto/HairPin 0.18
338 TestNetworkPlugins/group/custom-flannel/Start 95.22
339 TestNetworkPlugins/group/enable-default-cni/Start 124.24
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
342 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
343 TestNetworkPlugins/group/kindnet/DNS 0.2
344 TestNetworkPlugins/group/kindnet/Localhost 0.22
345 TestNetworkPlugins/group/kindnet/HairPin 0.19
346 TestNetworkPlugins/group/flannel/Start 99.38
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.29
349 TestNetworkPlugins/group/calico/NetCatPod 13.27
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.6
352 TestNetworkPlugins/group/calico/DNS 0.22
353 TestNetworkPlugins/group/calico/Localhost 0.21
354 TestNetworkPlugins/group/calico/HairPin 0.2
355 TestNetworkPlugins/group/custom-flannel/DNS 0.2
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
358 TestNetworkPlugins/group/bridge/Start 108.53
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
366 TestNetworkPlugins/group/flannel/NetCatPod 13.28
367 TestNetworkPlugins/group/flannel/DNS 0.16
368 TestNetworkPlugins/group/flannel/Localhost 0.15
369 TestNetworkPlugins/group/flannel/HairPin 0.16
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
371 TestNetworkPlugins/group/bridge/NetCatPod 11.23
372 TestNetworkPlugins/group/bridge/DNS 0.18
373 TestNetworkPlugins/group/bridge/Localhost 0.16
374 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (9.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-236740 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-236740 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.77229552s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-236740
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-236740: exit status 85 (74.970446ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |          |
	|         | -p download-only-236740        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:20.486562   16212 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:20.486713   16212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:20.486722   16212 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:20.486727   16212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:20.486924   16212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	W0213 21:56:20.487035   16212 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18171-8990/.minikube/config/config.json: open /home/jenkins/minikube-integration/18171-8990/.minikube/config/config.json: no such file or directory
	I0213 21:56:20.487626   16212 out.go:298] Setting JSON to true
	I0213 21:56:20.488545   16212 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2332,"bootTime":1707859049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:20.488609   16212 start.go:138] virtualization: kvm guest
	I0213 21:56:20.491259   16212 out.go:97] [download-only-236740] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:20.492862   16212 out.go:169] MINIKUBE_LOCATION=18171
	I0213 21:56:20.491467   16212 notify.go:220] Checking for updates...
	W0213 21:56:20.491432   16212 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball: no such file or directory
	I0213 21:56:20.495763   16212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:20.497368   16212 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:56:20.498789   16212 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:20.500185   16212 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 21:56:20.502436   16212 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 21:56:20.502686   16212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:20.605805   16212 out.go:97] Using the kvm2 driver based on user configuration
	I0213 21:56:20.605841   16212 start.go:298] selected driver: kvm2
	I0213 21:56:20.605848   16212 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:20.606242   16212 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:20.606387   16212 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:20.621427   16212 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:20.621509   16212 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:20.622041   16212 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0213 21:56:20.622189   16212 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 21:56:20.622240   16212 cni.go:84] Creating CNI manager for ""
	I0213 21:56:20.622254   16212 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:56:20.622265   16212 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:20.622285   16212 start_flags.go:321] config:
	{Name:download-only-236740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-236740 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:20.622493   16212 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:20.624716   16212 out.go:97] Downloading VM boot image ...
	I0213 21:56:20.624760   16212 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18171-8990/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0213 21:56:23.430311   16212 out.go:97] Starting control plane node download-only-236740 in cluster download-only-236740
	I0213 21:56:23.430338   16212 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 21:56:23.452645   16212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0213 21:56:23.452696   16212 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:23.452871   16212 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0213 21:56:23.455049   16212 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0213 21:56:23.455083   16212 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0213 21:56:23.477646   16212 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-236740"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-236740
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-142558 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-142558 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.78882436s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-142558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-142558: exit status 85 (73.611564ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-236740        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-236740        | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only        | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-142558        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:30.623472   16376 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:30.623616   16376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:30.623624   16376 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:30.623629   16376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:30.623852   16376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 21:56:30.624411   16376 out.go:298] Setting JSON to true
	I0213 21:56:30.625259   16376 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2342,"bootTime":1707859049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:30.625324   16376 start.go:138] virtualization: kvm guest
	I0213 21:56:30.627977   16376 out.go:97] [download-only-142558] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:30.629794   16376 out.go:169] MINIKUBE_LOCATION=18171
	I0213 21:56:30.628125   16376 notify.go:220] Checking for updates...
	I0213 21:56:30.632994   16376 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:30.634645   16376 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:56:30.636368   16376 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:30.638005   16376 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-142558"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-142558
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (8.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-452583 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-452583 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.486747614s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-452583
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-452583: exit status 85 (71.593601ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-236740           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-236740           | download-only-236740 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only           | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-142558           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| delete  | -p download-only-142558           | download-only-142558 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC | 13 Feb 24 21:56 UTC |
	| start   | -o=json --download-only           | download-only-452583 | jenkins | v1.32.0 | 13 Feb 24 21:56 UTC |                     |
	|         | -p download-only-452583           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/13 21:56:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0213 21:56:35.775193   16530 out.go:291] Setting OutFile to fd 1 ...
	I0213 21:56:35.775436   16530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:35.775444   16530 out.go:304] Setting ErrFile to fd 2...
	I0213 21:56:35.775448   16530 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 21:56:35.775653   16530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 21:56:35.776240   16530 out.go:298] Setting JSON to true
	I0213 21:56:35.777054   16530 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2347,"bootTime":1707859049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 21:56:35.777118   16530 start.go:138] virtualization: kvm guest
	I0213 21:56:35.779443   16530 out.go:97] [download-only-452583] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 21:56:35.781013   16530 out.go:169] MINIKUBE_LOCATION=18171
	I0213 21:56:35.779604   16530 notify.go:220] Checking for updates...
	I0213 21:56:35.783760   16530 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 21:56:35.785205   16530 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 21:56:35.786624   16530 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 21:56:35.787953   16530 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0213 21:56:35.790562   16530 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0213 21:56:35.790819   16530 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 21:56:35.822952   16530 out.go:97] Using the kvm2 driver based on user configuration
	I0213 21:56:35.822982   16530 start.go:298] selected driver: kvm2
	I0213 21:56:35.822987   16530 start.go:902] validating driver "kvm2" against <nil>
	I0213 21:56:35.823295   16530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:35.823386   16530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18171-8990/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0213 21:56:35.838306   16530 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0213 21:56:35.838368   16530 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0213 21:56:35.838913   16530 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0213 21:56:35.839058   16530 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0213 21:56:35.839114   16530 cni.go:84] Creating CNI manager for ""
	I0213 21:56:35.839129   16530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0213 21:56:35.839147   16530 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0213 21:56:35.839155   16530 start_flags.go:321] config:
	{Name:download-only-452583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-452583 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 21:56:35.839364   16530 iso.go:125] acquiring lock: {Name:mk545296e24c059052c28978eec23f85d38219ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0213 21:56:35.841233   16530 out.go:97] Starting control plane node download-only-452583 in cluster download-only-452583
	I0213 21:56:35.841250   16530 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 21:56:35.862356   16530 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0213 21:56:35.862387   16530 cache.go:56] Caching tarball of preloaded images
	I0213 21:56:35.862544   16530 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0213 21:56:35.864564   16530 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0213 21:56:35.864589   16530 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0213 21:56:35.889578   16530 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-452583"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)
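Note: the "exit status 85" from "minikube logs" is expected here, since --download-only never creates a node. A minimal sketch of re-checking the cached preload against the md5 advertised in the download URL above (the cache path is the MINIKUBE_HOME shown earlier in this log; not part of the harness):

    PRELOAD=/home/jenkins/minikube-integration/18171-8990/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
    # md5 taken from the ?checksum= parameter of the download URL logged above
    echo "9e0f57288adacc30aad3ff7e72a8dc68  $PRELOAD" | md5sum -c -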

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-452583
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-720567 --alsologtostderr --binary-mirror http://127.0.0.1:46241 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-720567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-720567
--- PASS: TestBinaryMirror (0.57s)
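A minimal sketch of the idea this test exercises, assuming a local directory of kubectl/kubelet/kubeadm binaries served over HTTP ("./mirror" and the profile name are made up for illustration; the --binary-mirror flag and port match the command above):

    # serve the mirror directory, then point minikube's downloads at it
    python3 -m http.server 46241 --directory ./mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:46241 --driver=kvm2 --container-runtime=crio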

                                                
                                    
x
+
TestOffline (130.41s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-837894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-837894 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m9.309732736s)
helpers_test.go:175: Cleaning up "offline-crio-837894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-837894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-837894: (1.097905599s)
--- PASS: TestOffline (130.41s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-548360
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-548360: exit status 85 (66.927234ms)

                                                
                                                
-- stdout --
	* Profile "addons-548360" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-548360"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-548360
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-548360: exit status 85 (65.496615ms)

                                                
                                                
-- stdout --
	* Profile "addons-548360" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-548360"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
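Both PreSetup checks assert the same behaviour: addon toggles against a profile that does not exist yet fail with exit status 85. A rough shell equivalent ("no-such-profile" is a made-up name):

    out/minikube-linux-amd64 addons enable dashboard -p no-such-profile
    echo "exit status: $?"   # the tests above expect 85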

                                                
                                    
x
+
TestAddons/Setup (155.81s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-548360 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-548360 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.813160947s)
--- PASS: TestAddons/Setup (155.81s)
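For reference, the same addons can also be toggled after the profile exists rather than passed as --addons flags at start time; a brief sketch using the profile created above:

    out/minikube-linux-amd64 -p addons-548360 addons enable ingress
    out/minikube-linux-amd64 -p addons-548360 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-548360 addons list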

                                                
                                    
x
+
TestAddons/parallel/Registry (15.95s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.32317ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-75mmv" [a146cfb0-9524-40f7-8bab-91a56de079a4] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005397475s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mfshx" [dad71134-5cc3-4fa4-b391-4a08b89d5d04] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006238977s
addons_test.go:340: (dbg) Run:  kubectl --context addons-548360 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-548360 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-548360 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.091868626s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 ip
2024/02/13 21:59:35 [DEBUG] GET http://192.168.39.217:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable registry --alsologtostderr -v=1: (1.610802408s)
--- PASS: TestAddons/parallel/Registry (15.95s)
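A condensed sketch of the reachability check this test performs: an in-cluster wget against the registry Service, then a host-side probe of the node IP on port 5000 (pod name "registry-probe" is made up; image, URL, and the ip command are taken from the log above):

    kubectl --context addons-548360 run --rm -i registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      wget --spider -S http://registry.kube-system.svc.cluster.local
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-548360 ip):5000"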

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.34s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lmwqp" [294e9c71-e9b6-473b-8919-583150c72970] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006455985s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-548360
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-548360: (6.333628513s)
--- PASS: TestAddons/parallel/InspektorGadget (12.34s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 17.009713ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-ghxhg" [723e578e-19de-4bcf-86ed-9de4ffbe5650] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005185876s
addons_test.go:415: (dbg) Run:  kubectl --context addons-548360 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.02s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (20.97s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.941814ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-jn92b" [2a63d83e-5212-4e3e-9e40-0e87c7d8a741] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005470563s
addons_test.go:473: (dbg) Run:  kubectl --context addons-548360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-548360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.969425508s)
addons_test.go:478: kubectl --context addons-548360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-548360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-548360 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.849528066s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable helm-tiller --alsologtostderr -v=1: (1.643683418s)
--- PASS: TestAddons/parallel/HelmTiller (20.97s)
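The "Unable to use a TTY" warning above comes from passing -t in a non-interactive CI shell; a sketch of the same check with just -i, which avoids the warning:

    kubectl --context addons-548360 run --rm -i helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version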

                                                
                                    
x
+
TestAddons/parallel/CSI (70.96s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 7.46182ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-548360 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-548360 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [56bc13ad-8f69-4373-8c30-a633083bc8db] Pending
helpers_test.go:344: "task-pv-pod" [56bc13ad-8f69-4373-8c30-a633083bc8db] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [56bc13ad-8f69-4373-8c30-a633083bc8db] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.008630072s
addons_test.go:584: (dbg) Run:  kubectl --context addons-548360 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-548360 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-548360 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-548360 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-548360 delete pod task-pv-pod: (1.289043473s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-548360 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-548360 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-548360 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e6e0d559-a9bb-42e5-a827-4be65eced4ea] Pending
helpers_test.go:344: "task-pv-pod-restore" [e6e0d559-a9bb-42e5-a827-4be65eced4ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e6e0d559-a9bb-42e5-a827-4be65eced4ea] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004909015s
addons_test.go:626: (dbg) Run:  kubectl --context addons-548360 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-548360 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-548360 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.848619472s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.96s)
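The long runs of "get pvc ... -o jsonpath={.status.phase}" above are the helper polling until each claim binds; on kubectl v1.23 or newer the same wait can be expressed in one command, for example:

    kubectl --context addons-548360 wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m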

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.24s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-548360 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-548360 --alsologtostderr -v=1: (3.231483192s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-q5pn7" [bd852b29-93e3-40cd-a95a-6cdad295e4e8] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-q5pn7" [bd852b29-93e3-40cd-a95a-6cdad295e4e8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-q5pn7" [bd852b29-93e3-40cd-a95a-6cdad295e4e8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.005245068s
--- PASS: TestAddons/parallel/Headlamp (18.24s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (59.6s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-548360 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-548360 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [affdf2fa-5a34-4efc-9130-04c4dbf7aeb8] Pending
helpers_test.go:344: "test-local-path" [affdf2fa-5a34-4efc-9130-04c4dbf7aeb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [affdf2fa-5a34-4efc-9130-04c4dbf7aeb8] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [affdf2fa-5a34-4efc-9130-04c4dbf7aeb8] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004645313s
addons_test.go:891: (dbg) Run:  kubectl --context addons-548360 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 ssh "cat /opt/local-path-provisioner/pvc-94c1659d-c197-459f-ae81-0c70edc6f082_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-548360 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-548360 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-548360 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-548360 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.6688979s)
--- PASS: TestAddons/parallel/LocalPath (59.60s)
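The repeated Pending phases above are expected: the local-path StorageClass typically uses volumeBindingMode WaitForFirstConsumer, so the claim only binds once the pod from pod.yaml is scheduled. A sketch of the same flow with an explicit wait instead of polling (manifest paths and the claim name are the ones used by the test):

    kubectl --context addons-548360 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-548360 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-548360 wait pvc/test-pvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=5m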

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mhcwx" [b9eec8df-b97e-4c67-9916-c51b3600b54b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005069363s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-548360
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-gmcgl" [a1dd1624-7e81-4306-8d34-c020ef448cac] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006655581s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-548360 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-548360 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestCertOptions (87.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-472714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-472714 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m26.056828806s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-472714 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-472714 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-472714 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-472714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-472714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-472714: (1.183194137s)
--- PASS: TestCertOptions (87.78s)
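A sketch of narrowing the same certificate inspection to just the SAN entries the test cares about (command and path taken from the log above):

    out/minikube-linux-amd64 -p cert-options-472714 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'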

                                                
                                    
x
+
TestCertExpiration (308.79s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-675174 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-675174 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m36.602575603s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-675174 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-675174 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.994848139s)
helpers_test.go:175: Cleaning up "cert-expiration-675174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-675174
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-675174: (1.187115587s)
--- PASS: TestCertExpiration (308.79s)

                                                
                                    
x
+
TestForceSystemdFlag (80.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-451444 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-451444 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.926165379s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-451444 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-451444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-451444
--- PASS: TestForceSystemdFlag (80.98s)
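The assertion here boils down to checking which cgroup manager CRI-O was configured with; a sketch of the same check, assuming "cgroup_manager" is the relevant key in the drop-in read above:

    out/minikube-linux-amd64 -p force-systemd-flag-451444 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"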

                                                
                                    
x
+
TestForceSystemdEnv (50.44s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-893752 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-893752 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.629566923s)
helpers_test.go:175: Cleaning up "force-systemd-env-893752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-893752
--- PASS: TestForceSystemdEnv (50.44s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.48s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.48s)

                                                
                                    
x
+
TestErrorSpam/setup (47.31s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-530305 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530305 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-530305 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-530305 --driver=kvm2  --container-runtime=crio: (47.313923777s)
--- PASS: TestErrorSpam/setup (47.31s)

                                                
                                    
x
+
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
x
+
TestErrorSpam/stop (2.27s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 stop: (2.102248218s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-530305 --log_dir /tmp/nospam-530305 stop
--- PASS: TestErrorSpam/stop (2.27s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18171-8990/.minikube/files/etc/test/nested/copy/16200/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (100.12s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-407129 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m40.115709207s)
--- PASS: TestFunctional/serial/StartWithProxy (100.12s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-407129 --alsologtostderr -v=8: (39.351487469s)
functional_test.go:659: soft start took 39.352220486s for "functional-407129" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-407129 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:3.1: (1.064664645s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:3.3: (1.374294204s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 cache add registry.k8s.io/pause:latest: (1.102862487s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-407129 /tmp/TestFunctionalserialCacheCmdcacheadd_local2250606823/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache add minikube-local-cache-test:functional-407129
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 cache add minikube-local-cache-test:functional-407129: (1.051123706s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache delete minikube-local-cache-test:functional-407129
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-407129
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (252.949179ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
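For reference, the cycle exercised above in isolation: remove the image inside the node, restore it from the host-side cache, then confirm it is visible to CRI-O again (all three commands appear verbatim in the log):

    out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-407129 cache reload
    out/minikube-linux-amd64 -p functional-407129 ssh sudo crictl inspecti registry.k8s.io/pause:latest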

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 kubectl -- --context functional-407129 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-407129 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-407129 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.477716101s)
functional_test.go:757: restart took 37.477834517s for "functional-407129" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.48s)
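--extra-config takes component.key=value pairs, here apiserver.enable-admission-plugins. A sketch of confirming the restarted apiserver picked the flag up, assuming the usual kubeadm label component=kube-apiserver on the static pod:

    kubectl --context functional-407129 -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission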

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-407129 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 logs: (1.543665328s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.61s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 logs --file /tmp/TestFunctionalserialLogsFileCmd2568871217/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 logs --file /tmp/TestFunctionalserialLogsFileCmd2568871217/001/logs.txt: (1.605927319s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.61s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-407129 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-407129
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-407129: exit status 115 (306.307209ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.127:30908 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-407129 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 config get cpus: exit status 14 (90.595901ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 config get cpus: exit status 14 (75.96771ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (20.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-407129 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-407129 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23185: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.91s)

                                                
                                    
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-407129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.778259ms)

                                                
                                                
-- stdout --
	* [functional-407129] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 22:09:11.531818   22738 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:09:11.532098   22738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:09:11.532108   22738 out.go:304] Setting ErrFile to fd 2...
	I0213 22:09:11.532113   22738 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:09:11.532277   22738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:09:11.532798   22738 out.go:298] Setting JSON to false
	I0213 22:09:11.533678   22738 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3103,"bootTime":1707859049,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:09:11.533767   22738 start.go:138] virtualization: kvm guest
	I0213 22:09:11.535877   22738 out.go:177] * [functional-407129] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:09:11.537706   22738 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:09:11.537581   22738 notify.go:220] Checking for updates...
	I0213 22:09:11.539181   22738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:09:11.540646   22738 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:09:11.541949   22738 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:09:11.543393   22738 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:09:11.545008   22738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:09:11.547162   22738 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:09:11.547600   22738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:09:11.547689   22738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:09:11.564978   22738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39445
	I0213 22:09:11.565373   22738 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:09:11.566037   22738 main.go:141] libmachine: Using API Version  1
	I0213 22:09:11.566074   22738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:09:11.566410   22738 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:09:11.566608   22738 main.go:141] libmachine: (functional-407129) Calling .DriverName
	I0213 22:09:11.566844   22738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:09:11.567219   22738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:09:11.567285   22738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:09:11.587870   22738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0213 22:09:11.588313   22738 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:09:11.589027   22738 main.go:141] libmachine: Using API Version  1
	I0213 22:09:11.589052   22738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:09:11.589467   22738 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:09:11.589670   22738 main.go:141] libmachine: (functional-407129) Calling .DriverName
	I0213 22:09:11.628935   22738 out.go:177] * Using the kvm2 driver based on existing profile
	I0213 22:09:11.630403   22738 start.go:298] selected driver: kvm2
	I0213 22:09:11.630423   22738 start.go:902] validating driver "kvm2" against &{Name:functional-407129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-407129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.127 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:09:11.630539   22738 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:09:11.632723   22738 out.go:177] 
	W0213 22:09:11.634244   22738 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0213 22:09:11.635583   22738 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-407129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-407129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.296387ms)

                                                
                                                
-- stdout --
	* [functional-407129] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 22:09:11.379876   22680 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:09:11.380028   22680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:09:11.380038   22680 out.go:304] Setting ErrFile to fd 2...
	I0213 22:09:11.380043   22680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:09:11.380354   22680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:09:11.380868   22680 out.go:298] Setting JSON to false
	I0213 22:09:11.381739   22680 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3103,"bootTime":1707859049,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:09:11.381814   22680 start.go:138] virtualization: kvm guest
	I0213 22:09:11.384513   22680 out.go:177] * [functional-407129] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0213 22:09:11.386075   22680 notify.go:220] Checking for updates...
	I0213 22:09:11.386094   22680 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:09:11.387957   22680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:09:11.389565   22680 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:09:11.390960   22680 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:09:11.392260   22680 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:09:11.393730   22680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:09:11.395606   22680 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:09:11.396359   22680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:09:11.396418   22680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:09:11.411675   22680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37557
	I0213 22:09:11.412033   22680 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:09:11.412594   22680 main.go:141] libmachine: Using API Version  1
	I0213 22:09:11.412620   22680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:09:11.412955   22680 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:09:11.413192   22680 main.go:141] libmachine: (functional-407129) Calling .DriverName
	I0213 22:09:11.413407   22680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:09:11.413695   22680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:09:11.413735   22680 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:09:11.429024   22680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45143
	I0213 22:09:11.429583   22680 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:09:11.430082   22680 main.go:141] libmachine: Using API Version  1
	I0213 22:09:11.430105   22680 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:09:11.430452   22680 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:09:11.430610   22680 main.go:141] libmachine: (functional-407129) Calling .DriverName
	I0213 22:09:11.466369   22680 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0213 22:09:11.467747   22680 start.go:298] selected driver: kvm2
	I0213 22:09:11.467760   22680 start.go:902] validating driver "kvm2" against &{Name:functional-407129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-407129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.127 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0213 22:09:11.467889   22680 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:09:11.470103   22680 out.go:177] 
	W0213 22:09:11.471379   22680 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0213 22:09:11.472512   22680 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-407129 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-407129 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-r4v64" [34b9ac34-e2a3-4c32-8489-1317c63a0d79] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-r4v64" [34b9ac34-e2a3-4c32-8489-1317c63a0d79] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005599684s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.127:31930
functional_test.go:1671: http://192.168.50.127:31930: success! body:

Hostname: hello-node-connect-55497b8b78-r4v64

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.127:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.127:31930
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.31s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4cf17f44-5a13-44b6-84c2-a25fa532fe88] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004941618s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-407129 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-407129 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-407129 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-407129 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-407129 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a507aa2e-22e9-4932-b118-56fb4fdbb931] Pending
E0213 22:09:21.413399   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:21.419448   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:21.429746   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:21.450049   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:21.490353   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:21.570691   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [a507aa2e-22e9-4932-b118-56fb4fdbb931] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0213 22:09:21.731572   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:22.051865   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:09:22.693047   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [a507aa2e-22e9-4932-b118-56fb4fdbb931] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004416955s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-407129 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-407129 delete -f testdata/storage-provisioner/pod.yaml
E0213 22:09:41.895696   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-407129 delete -f testdata/storage-provisioner/pod.yaml: (1.263282716s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-407129 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4e65ee16-302e-46b5-a0f0-ce0664798ded] Pending
helpers_test.go:344: "sp-pod" [4e65ee16-302e-46b5-a0f0-ce0664798ded] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4e65ee16-302e-46b5-a0f0-ce0664798ded] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.006156902s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-407129 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.05s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh -n functional-407129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cp functional-407129:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3264911483/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh -n functional-407129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh -n functional-407129 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.55s)

                                                
                                    
TestFunctional/parallel/MySQL (26.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-407129 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mrpr2" [1133cb0e-3210-4fdc-a28e-5b21812e3526] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mrpr2" [1133cb0e-3210-4fdc-a28e-5b21812e3526] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004418286s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-407129 exec mysql-859648c796-mrpr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-407129 exec mysql-859648c796-mrpr2 -- mysql -ppassword -e "show databases;": exit status 1 (153.688263ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-407129 exec mysql-859648c796-mrpr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-407129 exec mysql-859648c796-mrpr2 -- mysql -ppassword -e "show databases;": exit status 1 (198.155273ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-407129 exec mysql-859648c796-mrpr2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.92s)

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16200/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /etc/test/nested/copy/16200/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16200.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /etc/ssl/certs/16200.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16200.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /usr/share/ca-certificates/16200.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/162002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /etc/ssl/certs/162002.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/162002.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /usr/share/ca-certificates/162002.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-407129 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh "sudo systemctl is-active docker": exit status 1 (399.480581ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh "sudo systemctl is-active containerd": exit status 1 (256.524912ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                    
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (15.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-407129 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-407129 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nwfrs" [fa6e484a-f567-402b-b75d-16dccee43e5f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nwfrs" [fa6e484a-f567-402b-b75d-16dccee43e5f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.258213419s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "255.199523ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "67.948736ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "229.732104ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "62.368894ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-407129 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-407129
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-407129
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-407129 image ls --format short --alsologtostderr:
I0213 22:09:57.878394   24922 out.go:291] Setting OutFile to fd 1 ...
I0213 22:09:57.878553   24922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:57.878564   24922 out.go:304] Setting ErrFile to fd 2...
I0213 22:09:57.878571   24922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:57.878792   24922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
I0213 22:09:57.879435   24922 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:57.879554   24922 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:57.879956   24922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:57.880015   24922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:57.895566   24922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
I0213 22:09:57.896035   24922 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:57.896719   24922 main.go:141] libmachine: Using API Version  1
I0213 22:09:57.896749   24922 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:57.897167   24922 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:57.897351   24922 main.go:141] libmachine: (functional-407129) Calling .GetState
I0213 22:09:57.899521   24922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:57.899570   24922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:57.914756   24922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
I0213 22:09:57.915169   24922 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:57.915650   24922 main.go:141] libmachine: Using API Version  1
I0213 22:09:57.915672   24922 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:57.916039   24922 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:57.916243   24922 main.go:141] libmachine: (functional-407129) Calling .DriverName
I0213 22:09:57.916464   24922 ssh_runner.go:195] Run: systemctl --version
I0213 22:09:57.916496   24922 main.go:141] libmachine: (functional-407129) Calling .GetSSHHostname
I0213 22:09:57.919691   24922 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:57.920113   24922 main.go:141] libmachine: (functional-407129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:05:30", ip: ""} in network mk-functional-407129: {Iface:virbr1 ExpiryTime:2024-02-13 23:06:15 +0000 UTC Type:0 Mac:52:54:00:57:05:30 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:functional-407129 Clientid:01:52:54:00:57:05:30}
I0213 22:09:57.920149   24922 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined IP address 192.168.50.127 and MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:57.920325   24922 main.go:141] libmachine: (functional-407129) Calling .GetSSHPort
I0213 22:09:57.920516   24922 main.go:141] libmachine: (functional-407129) Calling .GetSSHKeyPath
I0213 22:09:57.920638   24922 main.go:141] libmachine: (functional-407129) Calling .GetSSHUsername
I0213 22:09:57.920761   24922 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/functional-407129/id_rsa Username:docker}
I0213 22:09:58.056186   24922 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:09:58.174282   24922 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.174306   24922 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.174605   24922 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.174626   24922 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:09:58.174644   24922 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.174654   24922 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.174974   24922 main.go:141] libmachine: (functional-407129) DBG | Closing plugin on server side
I0213 22:09:58.175005   24922 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.175014   24922 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-407129 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-407129  | 106148414b5cf | 3.35kB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| gcr.io/google-containers/addon-resizer  | functional-407129  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | 247f7abff9f70 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-407129 image ls --format table --alsologtostderr:
I0213 22:09:58.505641   25051 out.go:291] Setting OutFile to fd 1 ...
I0213 22:09:58.505764   25051 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.505773   25051 out.go:304] Setting ErrFile to fd 2...
I0213 22:09:58.505780   25051 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.506016   25051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
I0213 22:09:58.506596   25051 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.506718   25051 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.507148   25051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.507196   25051 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.523153   25051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
I0213 22:09:58.523545   25051 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.524212   25051 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.524236   25051 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.524690   25051 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.524885   25051 main.go:141] libmachine: (functional-407129) Calling .GetState
I0213 22:09:58.526648   25051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.526684   25051 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.543689   25051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
I0213 22:09:58.544097   25051 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.544469   25051 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.544492   25051 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.544893   25051 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.545045   25051 main.go:141] libmachine: (functional-407129) Calling .DriverName
I0213 22:09:58.545221   25051 ssh_runner.go:195] Run: systemctl --version
I0213 22:09:58.545244   25051 main.go:141] libmachine: (functional-407129) Calling .GetSSHHostname
I0213 22:09:58.547631   25051 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.548056   25051 main.go:141] libmachine: (functional-407129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:05:30", ip: ""} in network mk-functional-407129: {Iface:virbr1 ExpiryTime:2024-02-13 23:06:15 +0000 UTC Type:0 Mac:52:54:00:57:05:30 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:functional-407129 Clientid:01:52:54:00:57:05:30}
I0213 22:09:58.548081   25051 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined IP address 192.168.50.127 and MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.548238   25051 main.go:141] libmachine: (functional-407129) Calling .GetSSHPort
I0213 22:09:58.548355   25051 main.go:141] libmachine: (functional-407129) Calling .GetSSHKeyPath
I0213 22:09:58.548442   25051 main.go:141] libmachine: (functional-407129) Calling .GetSSHUsername
I0213 22:09:58.548516   25051 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/functional-407129/id_rsa Username:docker}
I0213 22:09:58.646908   25051 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:09:58.714164   25051 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.714191   25051 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.714472   25051 main.go:141] libmachine: (functional-407129) DBG | Closing plugin on server side
I0213 22:09:58.714494   25051 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.714508   25051 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:09:58.714518   25051 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.714525   25051 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.714745   25051 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.714757   25051 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-407129 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-407129"],"size":"34114467"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05","repoDigests":["docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938","docker.io/library/nginx@sha256:b41c95c4080d503eac2e455a47280079c5905c6281a1a5ee8fe75b52a92b35a0"],"repoTags":["docker.io/library/nginx:latest"],"size":"190871348"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@
sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"106148414b5cf4323ac74882e98110d129963c54dbd4a7467fecf1d687ef3178","repoDigests":["localhost/minikube-local-cache-test@sha256:f22f36aaf219bb61edca7b7bbd972b16b0e225985d8e903391f80daf523d96af"],"repoTags":["localhost/minikube-local-cache-test:functional-407129"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade
8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e3
3898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f350
95b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd
84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.
k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-407129 image ls --format json --alsologtostderr:
I0213 22:09:58.473978   25035 out.go:291] Setting OutFile to fd 1 ...
I0213 22:09:58.474298   25035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.474320   25035 out.go:304] Setting ErrFile to fd 2...
I0213 22:09:58.474328   25035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.474575   25035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
I0213 22:09:58.475483   25035 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.475690   25035 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.476319   25035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.476389   25035 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.491951   25035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33167
I0213 22:09:58.492387   25035 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.492994   25035 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.493028   25035 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.493440   25035 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.493670   25035 main.go:141] libmachine: (functional-407129) Calling .GetState
I0213 22:09:58.495721   25035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.495766   25035 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.512987   25035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39097
I0213 22:09:58.513473   25035 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.513980   25035 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.514364   25035 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.514697   25035 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.514901   25035 main.go:141] libmachine: (functional-407129) Calling .DriverName
I0213 22:09:58.515118   25035 ssh_runner.go:195] Run: systemctl --version
I0213 22:09:58.515151   25035 main.go:141] libmachine: (functional-407129) Calling .GetSSHHostname
I0213 22:09:58.518111   25035 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.518480   25035 main.go:141] libmachine: (functional-407129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:05:30", ip: ""} in network mk-functional-407129: {Iface:virbr1 ExpiryTime:2024-02-13 23:06:15 +0000 UTC Type:0 Mac:52:54:00:57:05:30 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:functional-407129 Clientid:01:52:54:00:57:05:30}
I0213 22:09:58.518514   25035 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined IP address 192.168.50.127 and MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.518640   25035 main.go:141] libmachine: (functional-407129) Calling .GetSSHPort
I0213 22:09:58.518797   25035 main.go:141] libmachine: (functional-407129) Calling .GetSSHKeyPath
I0213 22:09:58.518997   25035 main.go:141] libmachine: (functional-407129) Calling .GetSSHUsername
I0213 22:09:58.519159   25035 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/functional-407129/id_rsa Username:docker}
I0213 22:09:58.616398   25035 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:09:58.689139   25035 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.689157   25035 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.689425   25035 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.689450   25035 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:09:58.689476   25035 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.689486   25035 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.689729   25035 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.689743   25035 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-407129 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 247f7abff9f7097bbdab57df76fedd124d1e24a6ec4944fb5ef0ad128997ce05
repoDigests:
- docker.io/library/nginx@sha256:0e1330510a8e57568e7e908b27a50658ae84de9e9f907647cb4628fbc799f938
- docker.io/library/nginx@sha256:b41c95c4080d503eac2e455a47280079c5905c6281a1a5ee8fe75b52a92b35a0
repoTags:
- docker.io/library/nginx:latest
size: "190871348"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 106148414b5cf4323ac74882e98110d129963c54dbd4a7467fecf1d687ef3178
repoDigests:
- localhost/minikube-local-cache-test@sha256:f22f36aaf219bb61edca7b7bbd972b16b0e225985d8e903391f80daf523d96af
repoTags:
- localhost/minikube-local-cache-test:functional-407129
size: "3345"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-407129
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-407129 image ls --format yaml --alsologtostderr:
I0213 22:09:58.197382   24966 out.go:291] Setting OutFile to fd 1 ...
I0213 22:09:58.197526   24966 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.197542   24966 out.go:304] Setting ErrFile to fd 2...
I0213 22:09:58.197550   24966 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.197859   24966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
I0213 22:09:58.198838   24966 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.198993   24966 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.199575   24966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.199640   24966 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.216311   24966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
I0213 22:09:58.216819   24966 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.217393   24966 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.217412   24966 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.217914   24966 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.218082   24966 main.go:141] libmachine: (functional-407129) Calling .GetState
I0213 22:09:58.219957   24966 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.220019   24966 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.235418   24966 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
I0213 22:09:58.235921   24966 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.236456   24966 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.236485   24966 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.236821   24966 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.237103   24966 main.go:141] libmachine: (functional-407129) Calling .DriverName
I0213 22:09:58.237336   24966 ssh_runner.go:195] Run: systemctl --version
I0213 22:09:58.237359   24966 main.go:141] libmachine: (functional-407129) Calling .GetSSHHostname
I0213 22:09:58.240223   24966 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.240587   24966 main.go:141] libmachine: (functional-407129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:05:30", ip: ""} in network mk-functional-407129: {Iface:virbr1 ExpiryTime:2024-02-13 23:06:15 +0000 UTC Type:0 Mac:52:54:00:57:05:30 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:functional-407129 Clientid:01:52:54:00:57:05:30}
I0213 22:09:58.240616   24966 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined IP address 192.168.50.127 and MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.240926   24966 main.go:141] libmachine: (functional-407129) Calling .GetSSHPort
I0213 22:09:58.241126   24966 main.go:141] libmachine: (functional-407129) Calling .GetSSHKeyPath
I0213 22:09:58.241280   24966 main.go:141] libmachine: (functional-407129) Calling .GetSSHUsername
I0213 22:09:58.241425   24966 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/functional-407129/id_rsa Username:docker}
I0213 22:09:58.347259   24966 ssh_runner.go:195] Run: sudo crictl images --output json
I0213 22:09:58.423771   24966 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.423789   24966 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.424090   24966 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.424102   24966 main.go:141] libmachine: (functional-407129) DBG | Closing plugin on server side
I0213 22:09:58.424134   24966 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:09:58.424152   24966 main.go:141] libmachine: Making call to close driver server
I0213 22:09:58.424162   24966 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:09:58.424391   24966 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:09:58.424409   24966 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
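The ImageList format subtests above all drive the same image ls subcommand with a different --format value; as the stderr traces show, each invocation ultimately runs sudo crictl images --output json on the node over SSH and reformats the result. As a rough sketch (assuming the functional-407129 profile from this run), the equivalent manual calls are:

  out/minikube-linux-amd64 -p functional-407129 image ls --format table --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image ls --format json --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image ls --format yaml --alsologtostderr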

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh pgrep buildkitd: exit status 1 (248.290078ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image build -t localhost/my-image:functional-407129 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image build -t localhost/my-image:functional-407129 testdata/build --alsologtostderr: (2.514684359s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-407129 image build -t localhost/my-image:functional-407129 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3a0a867a1de
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-407129
--> 4b72cd64b80
Successfully tagged localhost/my-image:functional-407129
4b72cd64b807cd40119f448ee519639f086de833537bc85655ff6f18b25d8223
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-407129 image build -t localhost/my-image:functional-407129 testdata/build --alsologtostderr:
I0213 22:09:58.501388   25045 out.go:291] Setting OutFile to fd 1 ...
I0213 22:09:58.501616   25045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.501639   25045 out.go:304] Setting ErrFile to fd 2...
I0213 22:09:58.501649   25045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0213 22:09:58.502020   25045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
I0213 22:09:58.503594   25045 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.504575   25045 config.go:182] Loaded profile config "functional-407129": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0213 22:09:58.505173   25045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.505231   25045 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.520804   25045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39307
I0213 22:09:58.521446   25045 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.522057   25045 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.522083   25045 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.522539   25045 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.522693   25045 main.go:141] libmachine: (functional-407129) Calling .GetState
I0213 22:09:58.524938   25045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0213 22:09:58.524982   25045 main.go:141] libmachine: Launching plugin server for driver kvm2
I0213 22:09:58.539856   25045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
I0213 22:09:58.540223   25045 main.go:141] libmachine: () Calling .GetVersion
I0213 22:09:58.540728   25045 main.go:141] libmachine: Using API Version  1
I0213 22:09:58.540778   25045 main.go:141] libmachine: () Calling .SetConfigRaw
I0213 22:09:58.541126   25045 main.go:141] libmachine: () Calling .GetMachineName
I0213 22:09:58.541301   25045 main.go:141] libmachine: (functional-407129) Calling .DriverName
I0213 22:09:58.541492   25045 ssh_runner.go:195] Run: systemctl --version
I0213 22:09:58.541515   25045 main.go:141] libmachine: (functional-407129) Calling .GetSSHHostname
I0213 22:09:58.544660   25045 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.545166   25045 main.go:141] libmachine: (functional-407129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:05:30", ip: ""} in network mk-functional-407129: {Iface:virbr1 ExpiryTime:2024-02-13 23:06:15 +0000 UTC Type:0 Mac:52:54:00:57:05:30 Iaid: IPaddr:192.168.50.127 Prefix:24 Hostname:functional-407129 Clientid:01:52:54:00:57:05:30}
I0213 22:09:58.545193   25045 main.go:141] libmachine: (functional-407129) DBG | domain functional-407129 has defined IP address 192.168.50.127 and MAC address 52:54:00:57:05:30 in network mk-functional-407129
I0213 22:09:58.545230   25045 main.go:141] libmachine: (functional-407129) Calling .GetSSHPort
I0213 22:09:58.545347   25045 main.go:141] libmachine: (functional-407129) Calling .GetSSHKeyPath
I0213 22:09:58.545486   25045 main.go:141] libmachine: (functional-407129) Calling .GetSSHUsername
I0213 22:09:58.545956   25045 sshutil.go:53] new ssh client: &{IP:192.168.50.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/functional-407129/id_rsa Username:docker}
I0213 22:09:58.644357   25045 build_images.go:151] Building image from path: /tmp/build.1051060060.tar
I0213 22:09:58.644511   25045 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0213 22:09:58.662997   25045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1051060060.tar
I0213 22:09:58.675340   25045 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1051060060.tar: stat -c "%s %y" /var/lib/minikube/build/build.1051060060.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1051060060.tar': No such file or directory
I0213 22:09:58.675382   25045 ssh_runner.go:362] scp /tmp/build.1051060060.tar --> /var/lib/minikube/build/build.1051060060.tar (3072 bytes)
I0213 22:09:58.729118   25045 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1051060060
I0213 22:09:58.738446   25045 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1051060060 -xf /var/lib/minikube/build/build.1051060060.tar
I0213 22:09:58.747694   25045 crio.go:297] Building image: /var/lib/minikube/build/build.1051060060
I0213 22:09:58.747768   25045 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-407129 /var/lib/minikube/build/build.1051060060 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0213 22:10:00.909145   25045 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-407129 /var/lib/minikube/build/build.1051060060 --cgroup-manager=cgroupfs: (2.16134725s)
I0213 22:10:00.909250   25045 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1051060060
I0213 22:10:00.928259   25045 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1051060060.tar
I0213 22:10:00.938752   25045 build_images.go:207] Built localhost/my-image:functional-407129 from /tmp/build.1051060060.tar
I0213 22:10:00.938802   25045 build_images.go:123] succeeded building to: functional-407129
I0213 22:10:00.938808   25045 build_images.go:124] failed building to: 
I0213 22:10:00.938830   25045 main.go:141] libmachine: Making call to close driver server
I0213 22:10:00.938855   25045 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:10:00.939181   25045 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:10:00.939203   25045 main.go:141] libmachine: Making call to close connection to plugin binary
I0213 22:10:00.939214   25045 main.go:141] libmachine: Making call to close driver server
I0213 22:10:00.939224   25045 main.go:141] libmachine: (functional-407129) Calling .Close
I0213 22:10:00.939473   25045 main.go:141] libmachine: Successfully made call to close driver server
I0213 22:10:00.939493   25045 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.00s)
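The ImageBuild subtest first confirms that no buildkitd is running in the guest (the pgrep call is expected to exit non-zero), then builds a throwaway image from the testdata/build context; with the crio runtime the build is delegated to podman on the node, as the sudo podman build line in the trace shows. A minimal sketch of the same flow against this profile:

  out/minikube-linux-amd64 -p functional-407129 ssh pgrep buildkitd
  out/minikube-linux-amd64 -p functional-407129 image build -t localhost/my-image:functional-407129 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image ls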

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-407129
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr: (4.477145244s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr: (2.668075647s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)
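Setup, ImageLoadDaemon and ImageReloadDaemon together cover pushing an image from the host Docker daemon into the cluster's image store: pull and retag addon-resizer locally, then load it (twice, to exercise the reload path). Sketched with the same names used in this run:

  docker pull gcr.io/google-containers/addon-resizer:1.8.8
  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-407129
  out/minikube-linux-amd64 -p functional-407129 image load --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image ls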

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service list
E0213 22:09:26.534583   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service list -o json
functional_test.go:1490: Took "591.810184ms" to run "out/minikube-linux-amd64 -p functional-407129 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.127:32523
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.127:32523
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)
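The ServiceCmd subtests above enumerate the ways minikube service can report an endpoint for the hello-node service; in this run both the --https and plain --url variants resolved to 192.168.50.127:32523. A sketch of the same queries against this profile:

  out/minikube-linux-amd64 -p functional-407129 service list
  out/minikube-linux-amd64 -p functional-407129 service list -o json
  out/minikube-linux-amd64 -p functional-407129 service --namespace=default --https --url hello-node
  out/minikube-linux-amd64 -p functional-407129 service hello-node --url --format={{.IP}}
  out/minikube-linux-amd64 -p functional-407129 service hello-node --url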

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (23.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdany-port4115823253/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707862170967576456" to /tmp/TestFunctionalparallelMountCmdany-port4115823253/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707862170967576456" to /tmp/TestFunctionalparallelMountCmdany-port4115823253/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707862170967576456" to /tmp/TestFunctionalparallelMountCmdany-port4115823253/001/test-1707862170967576456
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.551058ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0213 22:09:31.655526   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh -- ls -la /mount-9p
2024/02/13 22:09:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 13 22:09 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 13 22:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 13 22:09 test-1707862170967576456
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh cat /mount-9p/test-1707862170967576456
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-407129 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [57ff2448-d202-47b4-908b-095e44fd7d1f] Pending
helpers_test.go:344: "busybox-mount" [57ff2448-d202-47b4-908b-095e44fd7d1f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [57ff2448-d202-47b4-908b-095e44fd7d1f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [57ff2448-d202-47b4-908b-095e44fd7d1f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 21.005235684s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-407129 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdany-port4115823253/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.93s)
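The any-port subtest runs minikube mount as a background daemon, seeds the host directory with test files, verifies the 9p mount from inside the guest, exercises it from a busybox pod, and finally unmounts. A sketch of the manual equivalent; the host path below is hypothetical (the harness uses a per-run temp directory), and the mount command blocks, so it is run from a separate shell:

  out/minikube-linux-amd64 mount -p functional-407129 /tmp/example-mount:/mount-9p --alsologtostderr -v=1
  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-407129 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-407129 ssh "sudo umount -f /mount-9p"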

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image save gcr.io/google-containers/addon-resizer:functional-407129 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image save gcr.io/google-containers/addon-resizer:functional-407129 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (3.523513177s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image rm gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.994814937s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-407129
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 image save --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-407129 image save --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr: (7.511459227s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-407129
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.55s)
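ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon round-trip the addon-resizer image between the cluster runtime, a tarball on the host, and the host Docker daemon. Sketch of that round trip; ./addon-resizer-save.tar is a stand-in for whatever path the caller chooses (this run wrote it under the Jenkins workspace):

  out/minikube-linux-amd64 -p functional-407129 image save gcr.io/google-containers/addon-resizer:functional-407129 ./addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image rm gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image load ./addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-407129 image save --daemon gcr.io/google-containers/addon-resizer:functional-407129 --alsologtostderr
  docker image inspect gcr.io/google-containers/addon-resizer:functional-407129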

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdspecific-port3502437313/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (242.898062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdspecific-port3502437313/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-407129 ssh "sudo umount -f /mount-9p": exit status 1 (217.423303ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-407129 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdspecific-port3502437313/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)
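For reference, a minimal hand-run sketch of the mount check exercised above (the profile name, mount point, and port are the ones from this run; the host directory is an illustrative placeholder):

	# start a 9p mount of a host directory on a fixed port, in the background
	out/minikube-linux-amd64 mount -p functional-407129 /tmp/mount-src:/mount-9p --port 46464 &
	# confirm the guest sees a 9p filesystem at the mount point, then list it
	out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-407129 ssh -- ls -la /mount-9p
	# force-unmount inside the guest; exits non-zero (status 32) if nothing is mounted, as seen above
	out/minikube-linux-amd64 -p functional-407129 ssh "sudo umount -f /mount-9p"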

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-407129 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-407129 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-407129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup336795474/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.80s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-407129
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-407129
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-407129
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (107.15s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-741217 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0213 22:10:43.336949   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-741217 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m47.146825656s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (107.15s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons enable ingress --alsologtostderr -v=5: (13.456700674s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.46s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-741217 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

                                                
                                    
TestJSONOutput/start/Command (90.09s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-574086 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0213 22:15:33.061394   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-574086 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m30.085499455s)
--- PASS: TestJSONOutput/start/Command (90.09s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-574086 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-574086 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.1s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-574086 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-574086 --output=json --user=testUser: (7.102966427s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-311531 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-311531 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.486175ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3d979c5e-0d54-430a-8120-59efca4c16cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-311531] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6b0b2b6-0e36-455c-80a3-9d9d1d126078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18171"}}
	{"specversion":"1.0","id":"680275f7-6e38-47c7-91f4-1f2b066f5b8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"681e9871-475d-4f15-b46e-327cecb6ddea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig"}}
	{"specversion":"1.0","id":"ed6aa0ed-cc22-4381-946d-9b9e78650ade","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube"}}
	{"specversion":"1.0","id":"09396554-f0a1-47d9-9b43-eea0269fe8ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d6c417c5-370a-467a-b1c7-f01e2d0eb9d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5736fc3d-7146-4262-96f8-da55ab5e0280","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-311531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-311531
--- PASS: TestErrorJSONOutput (0.22s)
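The stdout captured above is one CloudEvents-style JSON object per line; the failure itself travels in the event with type io.k8s.sigs.minikube.error. A minimal sketch of pulling that message out (jq is an assumption here, it is not used by the test):

	out/minikube-linux-amd64 start -p json-output-error-311531 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# prints: The driver 'fail' is not supported on linux/amd64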

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (99.06s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-398264 --driver=kvm2  --container-runtime=crio
E0213 22:16:54.982189   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:17:03.710047   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:03.715368   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:03.725678   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:03.746011   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:03.786320   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:03.866736   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:04.027184   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:04.347841   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:04.988853   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:06.269589   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:08.831418   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:13.952180   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-398264 --driver=kvm2  --container-runtime=crio: (48.480152807s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-401329 --driver=kvm2  --container-runtime=crio
E0213 22:17:24.192602   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:17:44.673744   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-401329 --driver=kvm2  --container-runtime=crio: (47.710826001s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-398264
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-401329
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-401329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-401329
helpers_test.go:175: Cleaning up "first-398264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-398264
--- PASS: TestMinikubeProfile (99.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-660584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0213 22:18:25.634884   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-660584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.393928922s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.39s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-660584 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-660584 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-680234 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-680234 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.068031733s)
E0213 22:19:11.137905   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (30.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-660584 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-680234
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-680234: (1.203353774s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.38s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-680234
E0213 22:19:21.413862   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-680234: (22.375579947s)
--- PASS: TestMountStart/serial/RestartStopped (23.38s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-680234 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (111.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-413653 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0213 22:19:47.555785   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-413653 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.054655378s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.50s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-413653 -- rollout status deployment/busybox: (3.821667103s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-w6ghx -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-w6ghx -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-w6ghx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.70s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-w6ghx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-w6ghx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
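The exec pairs above run one check per pod: resolve host.minikube.internal inside the pod, pick the address field out of busybox's nslookup output (line 5, third space-separated field), and ping it (192.168.39.1 in this run). A hand-run sketch of the same check for one pod, using the pod name from this run:

	# capture the resolved host address, then ping it from inside the pod
	HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	out/minikube-linux-amd64 kubectl -p multinode-413653 -- exec busybox-5b5d89c9d6-2lg9w -- ping -c 1 "$HOST_IP"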

                                                
                                    
TestMultiNode/serial/AddNode (43.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-413653 -v 3 --alsologtostderr
E0213 22:22:03.709898   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-413653 -v 3 --alsologtostderr: (42.524032325s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.14s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-413653 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp testdata/cp-test.txt multinode-413653:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3140751563/001/cp-test_multinode-413653.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653:/home/docker/cp-test.txt multinode-413653-m02:/home/docker/cp-test_multinode-413653_multinode-413653-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test_multinode-413653_multinode-413653-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653:/home/docker/cp-test.txt multinode-413653-m03:/home/docker/cp-test_multinode-413653_multinode-413653-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test_multinode-413653_multinode-413653-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp testdata/cp-test.txt multinode-413653-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3140751563/001/cp-test_multinode-413653-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt multinode-413653:/home/docker/cp-test_multinode-413653-m02_multinode-413653.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test_multinode-413653-m02_multinode-413653.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m02:/home/docker/cp-test.txt multinode-413653-m03:/home/docker/cp-test_multinode-413653-m02_multinode-413653-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test_multinode-413653-m02_multinode-413653-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp testdata/cp-test.txt multinode-413653-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3140751563/001/cp-test_multinode-413653-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt multinode-413653:/home/docker/cp-test_multinode-413653-m03_multinode-413653.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653 "sudo cat /home/docker/cp-test_multinode-413653-m03_multinode-413653.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 cp multinode-413653-m03:/home/docker/cp-test.txt multinode-413653-m02:/home/docker/cp-test_multinode-413653-m03_multinode-413653-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 ssh -n multinode-413653-m02 "sudo cat /home/docker/cp-test_multinode-413653-m03_multinode-413653-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.97s)

                                                
                                    
TestMultiNode/serial/StopNode (3.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-413653 node stop m03: (2.092268361s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-413653 status: exit status 7 (479.812278ms)

                                                
                                                
-- stdout --
	multinode-413653
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-413653-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-413653-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr
E0213 22:22:31.396498   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr: exit status 7 (459.63609ms)

                                                
                                                
-- stdout --
	multinode-413653
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-413653-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-413653-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 22:22:31.230705   32224 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:22:31.230969   32224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:22:31.230979   32224 out.go:304] Setting ErrFile to fd 2...
	I0213 22:22:31.230984   32224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:22:31.231177   32224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:22:31.231375   32224 out.go:298] Setting JSON to false
	I0213 22:22:31.231407   32224 mustload.go:65] Loading cluster: multinode-413653
	I0213 22:22:31.231520   32224 notify.go:220] Checking for updates...
	I0213 22:22:31.231969   32224 config.go:182] Loaded profile config "multinode-413653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:22:31.231991   32224 status.go:255] checking status of multinode-413653 ...
	I0213 22:22:31.232489   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.232569   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.251000   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I0213 22:22:31.251457   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.252163   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.252203   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.252563   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.252797   32224 main.go:141] libmachine: (multinode-413653) Calling .GetState
	I0213 22:22:31.254568   32224 status.go:330] multinode-413653 host status = "Running" (err=<nil>)
	I0213 22:22:31.254592   32224 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:22:31.254937   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.254971   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.269237   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0213 22:22:31.269633   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.270174   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.270201   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.270518   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.270700   32224 main.go:141] libmachine: (multinode-413653) Calling .GetIP
	I0213 22:22:31.273421   32224 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:22:31.273840   32224 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:22:31.273887   32224 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:22:31.274057   32224 host.go:66] Checking if "multinode-413653" exists ...
	I0213 22:22:31.274369   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.274403   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.289531   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I0213 22:22:31.289911   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.290323   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.290343   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.290625   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.290787   32224 main.go:141] libmachine: (multinode-413653) Calling .DriverName
	I0213 22:22:31.290974   32224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 22:22:31.291001   32224 main.go:141] libmachine: (multinode-413653) Calling .GetSSHHostname
	I0213 22:22:31.293522   32224 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:22:31.294002   32224 main.go:141] libmachine: (multinode-413653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:d7:5b", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:19:55 +0000 UTC Type:0 Mac:52:54:00:cc:d7:5b Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-413653 Clientid:01:52:54:00:cc:d7:5b}
	I0213 22:22:31.294036   32224 main.go:141] libmachine: (multinode-413653) DBG | domain multinode-413653 has defined IP address 192.168.39.81 and MAC address 52:54:00:cc:d7:5b in network mk-multinode-413653
	I0213 22:22:31.294173   32224 main.go:141] libmachine: (multinode-413653) Calling .GetSSHPort
	I0213 22:22:31.294342   32224 main.go:141] libmachine: (multinode-413653) Calling .GetSSHKeyPath
	I0213 22:22:31.294490   32224 main.go:141] libmachine: (multinode-413653) Calling .GetSSHUsername
	I0213 22:22:31.294636   32224 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653/id_rsa Username:docker}
	I0213 22:22:31.389779   32224 ssh_runner.go:195] Run: systemctl --version
	I0213 22:22:31.395337   32224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:22:31.410111   32224 kubeconfig.go:92] found "multinode-413653" server: "https://192.168.39.81:8443"
	I0213 22:22:31.410137   32224 api_server.go:166] Checking apiserver status ...
	I0213 22:22:31.410174   32224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0213 22:22:31.423028   32224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	I0213 22:22:31.432526   32224 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podcba523270c42e30a16923f778faad5a9/crio-0958e51d618e6e087fc31d14635bb12a5e60cb3560ba0557cbfe5772b2add60b"
	I0213 22:22:31.432595   32224 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcba523270c42e30a16923f778faad5a9/crio-0958e51d618e6e087fc31d14635bb12a5e60cb3560ba0557cbfe5772b2add60b/freezer.state
	I0213 22:22:31.443228   32224 api_server.go:204] freezer state: "THAWED"
	I0213 22:22:31.443262   32224 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	I0213 22:22:31.448633   32224 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I0213 22:22:31.448659   32224 status.go:421] multinode-413653 apiserver status = Running (err=<nil>)
	I0213 22:22:31.448671   32224 status.go:257] multinode-413653 status: &{Name:multinode-413653 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0213 22:22:31.448691   32224 status.go:255] checking status of multinode-413653-m02 ...
	I0213 22:22:31.449006   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.449060   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.464049   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0213 22:22:31.464461   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.464915   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.464945   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.465238   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.465414   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetState
	I0213 22:22:31.466969   32224 status.go:330] multinode-413653-m02 host status = "Running" (err=<nil>)
	I0213 22:22:31.466984   32224 host.go:66] Checking if "multinode-413653-m02" exists ...
	I0213 22:22:31.467351   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.467414   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.481688   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0213 22:22:31.482070   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.482470   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.482489   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.482803   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.482960   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetIP
	I0213 22:22:31.485643   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:22:31.486145   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:22:31.486171   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:22:31.486245   32224 host.go:66] Checking if "multinode-413653-m02" exists ...
	I0213 22:22:31.486535   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.486569   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.501346   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0213 22:22:31.501734   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.502215   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.502234   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.502512   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.502691   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .DriverName
	I0213 22:22:31.502871   32224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0213 22:22:31.502888   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHHostname
	I0213 22:22:31.505576   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:22:31.505969   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:a6:d1", ip: ""} in network mk-multinode-413653: {Iface:virbr1 ExpiryTime:2024-02-13 23:21:05 +0000 UTC Type:0 Mac:52:54:00:d5:a6:d1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-413653-m02 Clientid:01:52:54:00:d5:a6:d1}
	I0213 22:22:31.506004   32224 main.go:141] libmachine: (multinode-413653-m02) DBG | domain multinode-413653-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:d5:a6:d1 in network mk-multinode-413653
	I0213 22:22:31.506179   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHPort
	I0213 22:22:31.506356   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHKeyPath
	I0213 22:22:31.506506   32224 main.go:141] libmachine: (multinode-413653-m02) Calling .GetSSHUsername
	I0213 22:22:31.506645   32224 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18171-8990/.minikube/machines/multinode-413653-m02/id_rsa Username:docker}
	I0213 22:22:31.600912   32224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0213 22:22:31.613760   32224 status.go:257] multinode-413653-m02 status: &{Name:multinode-413653-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0213 22:22:31.613797   32224 status.go:255] checking status of multinode-413653-m03 ...
	I0213 22:22:31.614181   32224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0213 22:22:31.614220   32224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0213 22:22:31.628644   32224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38385
	I0213 22:22:31.629068   32224 main.go:141] libmachine: () Calling .GetVersion
	I0213 22:22:31.629489   32224 main.go:141] libmachine: Using API Version  1
	I0213 22:22:31.629506   32224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0213 22:22:31.629803   32224 main.go:141] libmachine: () Calling .GetMachineName
	I0213 22:22:31.630010   32224 main.go:141] libmachine: (multinode-413653-m03) Calling .GetState
	I0213 22:22:31.631521   32224 status.go:330] multinode-413653-m03 host status = "Stopped" (err=<nil>)
	I0213 22:22:31.631538   32224 status.go:343] host is not running, skipping remaining checks
	I0213 22:22:31.631545   32224 status.go:257] multinode-413653-m03 status: &{Name:multinode-413653-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.03s)
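Note: the per-node status check logged above reduces to two host-level probes run over the node's SSH session, a disk-usage read of /var and a kubelet liveness check. A minimal shell sketch of the same probes (both commands are copied verbatim from the log; the ssh_runner wrapping is omitted):

    # Percentage of /var in use: second line of df output, fifth column
    df -h /var | awk 'NR==2{print $5}'

    # Exit code 0 means the kubelet unit is active; non-zero means it is stopped
    sudo systemctl is-active --quiet service kubelet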

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-413653 node start m03 --alsologtostderr: (28.91045794s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.58s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-413653 node delete m03: (1.03404972s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.59s)
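Note: the final kubectl call above uses a Go template to print only the Ready condition of each remaining node, one status per line. The same command, taken verbatim from the log and shown with a comment (the expectation that every surviving node reports "True" is an inference from the test passing, not something printed here):

    # Prints the status of the Ready condition for every node, one per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'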

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (447.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-413653 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0213 22:37:03.710445   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:39:11.138089   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:39:21.413499   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:42:03.710040   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 22:42:24.461626   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:44:11.137454   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:44:21.413574   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-413653 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m26.928854856s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-413653 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (447.52s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (50.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-413653
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-413653-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-413653-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.473337ms)

                                                
                                                
-- stdout --
	* [multinode-413653-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-413653-m02' is duplicated with machine name 'multinode-413653-m02' in profile 'multinode-413653'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-413653-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-413653-m03 --driver=kvm2  --container-runtime=crio: (48.961648908s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-413653
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-413653: exit status 80 (231.588147ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-413653
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-413653-m03 already exists in multinode-413653-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-413653-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.15s)
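Note: multi-node machine names follow the pattern <profile>-m02, <profile>-m03, so a standalone profile named that way collides with an existing machine. The two refusals exercised above, as plain commands (profile names and exit codes taken from the log):

    # Rejected: "multinode-413653-m02" already names a machine inside profile "multinode-413653"
    minikube start -p multinode-413653-m02 --driver=kvm2 --container-runtime=crio    # exit 14 (MK_USAGE)

    # Rejected: the next node name, m03, is already taken by the standalone profile created just before
    minikube node add -p multinode-413653                                            # exit 80 (GUEST_NODE_ADD)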

                                                
                                    
x
+
TestScheduledStopUnix (120.37s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-842696 --memory=2048 --driver=kvm2  --container-runtime=crio
E0213 22:50:06.759009   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-842696 --memory=2048 --driver=kvm2  --container-runtime=crio: (48.611220074s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-842696 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-842696 -n scheduled-stop-842696
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-842696 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-842696 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-842696 -n scheduled-stop-842696
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-842696
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-842696 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-842696
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-842696: exit status 7 (80.05796ms)

                                                
                                                
-- stdout --
	scheduled-stop-842696
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-842696 -n scheduled-stop-842696
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-842696 -n scheduled-stop-842696: exit status 7 (76.433004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-842696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-842696
--- PASS: TestScheduledStopUnix (120.37s)
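Note: the scheduled-stop flow above boils down to arming a stop timer, cancelling it, re-arming a short one, and then polling status until the host reports Stopped (exit status 7, which the test treats as "may be ok"). A condensed sketch with the same profile name:

    minikube stop -p scheduled-stop-842696 --schedule 5m              # arm a stop 5 minutes out
    minikube status -p scheduled-stop-842696 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-842696 --cancel-scheduled         # disarm it
    minikube stop -p scheduled-stop-842696 --schedule 15s             # arm a short timer and let it fire
    minikube status -p scheduled-stop-842696                          # exit status 7 once the host is stopped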

                                                
                                    
x
+
TestRunningBinaryUpgrade (213.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1842158999 start -p running-upgrade-905186 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0213 22:52:03.710510   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1842158999 start -p running-upgrade-905186 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.924893531s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-905186 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-905186 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.407689678s)
helpers_test.go:175: Cleaning up "running-upgrade-905186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-905186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-905186: (1.183122486s)
--- PASS: TestRunningBinaryUpgrade (213.15s)

                                                
                                    
x
+
TestKubernetesUpgrade (259.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m59.154308937s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-181202
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-181202: (3.400443134s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-181202 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-181202 status --format={{.Host}}: exit status 7 (98.98672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.351257103s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-181202 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (96.832843ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-181202] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-181202
	    minikube start -p kubernetes-upgrade-181202 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1812022 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-181202 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-181202 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.176507774s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-181202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-181202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-181202: (1.188079117s)
--- PASS: TestKubernetesUpgrade (259.54s)
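Note: the upgrade path above is start-old, stop, start-new; a direct downgrade of the same profile is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested recovery in the stderr block is to delete and recreate the profile. Condensed, with versions and profile name from the log:

    minikube start -p kubernetes-upgrade-181202 --kubernetes-version=v1.16.0     --driver=kvm2 --container-runtime=crio
    minikube stop  -p kubernetes-upgrade-181202
    minikube start -p kubernetes-upgrade-181202 --kubernetes-version=v1.29.0-rc.2 --driver=kvm2 --container-runtime=crio
    minikube start -p kubernetes-upgrade-181202 --kubernetes-version=v1.16.0     --driver=kvm2 --container-runtime=crio   # exit 106, downgrade refused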

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (91.687932ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-890312] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
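Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so the start above fails fast with exit status 14 before any VM work happens. If the version is coming from global config rather than the flag, the stderr block suggests unsetting it:

    minikube start -p NoKubernetes-890312 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version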

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (101.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-890312 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-890312 --driver=kvm2  --container-runtime=crio: (1m41.205548738s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-890312 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.50s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (37.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.748184991s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-890312 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-890312 status -o json: exit status 2 (305.945345ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-890312","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-890312
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-890312: (1.138683415s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (130.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1995856703 start -p stopped-upgrade-798342 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0213 22:54:11.137401   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1995856703 start -p stopped-upgrade-798342 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m3.824258108s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1995856703 -p stopped-upgrade-798342 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1995856703 -p stopped-upgrade-798342 stop: (2.317749995s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-798342 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-798342 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.168060867s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.31s)
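Note: the stopped-binary upgrade is: provision a profile with an older release binary, stop it, then start the same profile with the binary under test. Condensed from the log (the /tmp path appears to be a temporary copy of the v1.26.0 release fetched during Setup; that provenance is an assumption):

    /tmp/minikube-v1.26.0.1995856703 start -p stopped-upgrade-798342 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.1995856703 -p stopped-upgrade-798342 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-798342 --memory=2200 --driver=kvm2 --container-runtime=crio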

                                                
                                    
x
+
TestNoKubernetes/serial/Start (57.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0213 22:54:21.413921   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-890312 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.994393493s)
--- PASS: TestNoKubernetes/serial/Start (57.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-890312 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-890312 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.785899ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
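Note: the verification is a single SSH command. systemctl is-active exits 0 when the unit is active and non-zero when it is not (3 here, surfaced by ssh as "Process exited with status 3"), so a non-zero exit is exactly what a --no-kubernetes profile should produce:

    minikube ssh -p NoKubernetes-890312 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero means the kubelet is not running, which is the expected outcome here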

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (11.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.498648297s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (8.341207407s)
--- PASS: TestNoKubernetes/serial/ProfileList (11.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-890312
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-890312: (1.342247889s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (51.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-890312 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-890312 --driver=kvm2  --container-runtime=crio: (51.427318455s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-798342
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-798342: (1.162678929s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-890312 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-890312 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.741865ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
x
+
TestPause/serial/Start (159.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998671 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0213 22:57:03.710175   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-998671 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m39.336874496s)
--- PASS: TestPause/serial/Start (159.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-397221 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-397221 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (170.665987ms)

                                                
                                                
-- stdout --
	* [false-397221] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18171
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0213 22:57:45.861001   44540 out.go:291] Setting OutFile to fd 1 ...
	I0213 22:57:45.861197   44540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:57:45.861209   44540 out.go:304] Setting ErrFile to fd 2...
	I0213 22:57:45.861217   44540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0213 22:57:45.861573   44540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18171-8990/.minikube/bin
	I0213 22:57:45.862512   44540 out.go:298] Setting JSON to false
	I0213 22:57:45.863959   44540 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6017,"bootTime":1707859049,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1051-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0213 22:57:45.864045   44540 start.go:138] virtualization: kvm guest
	I0213 22:57:45.866864   44540 out.go:177] * [false-397221] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0213 22:57:45.868318   44540 out.go:177]   - MINIKUBE_LOCATION=18171
	I0213 22:57:45.869628   44540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0213 22:57:45.868386   44540 notify.go:220] Checking for updates...
	I0213 22:57:45.870995   44540 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18171-8990/kubeconfig
	I0213 22:57:45.872301   44540 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18171-8990/.minikube
	I0213 22:57:45.873609   44540 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0213 22:57:45.875000   44540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0213 22:57:45.877268   44540 config.go:182] Loaded profile config "cert-expiration-675174": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:57:45.877444   44540 config.go:182] Loaded profile config "cert-options-472714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:57:45.877598   44540 config.go:182] Loaded profile config "pause-998671": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0213 22:57:45.877716   44540 driver.go:392] Setting default libvirt URI to qemu:///system
	I0213 22:57:45.933400   44540 out.go:177] * Using the kvm2 driver based on user configuration
	I0213 22:57:45.934659   44540 start.go:298] selected driver: kvm2
	I0213 22:57:45.934688   44540 start.go:902] validating driver "kvm2" against <nil>
	I0213 22:57:45.934713   44540 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0213 22:57:45.936925   44540 out.go:177] 
	W0213 22:57:45.938189   44540 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0213 22:57:45.939464   44540 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-397221 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.191:8443
  name: cert-expiration-675174
contexts:
- context:
    cluster: cert-expiration-675174
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-675174
  name: cert-expiration-675174
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-675174
  user:
    client-certificate: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.crt
    client-key: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-397221

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-397221"

                                                
                                                
----------------------- debugLogs end: false-397221 [took: 5.66967908s] --------------------------------
helpers_test.go:175: Cleaning up "false-397221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-397221
--- PASS: TestNetworkPlugins/group/false (6.01s)
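Note: the "false" CNI group is expected to be rejected up front: the crio runtime requires a CNI, so start exits with status 14 before any VM is created, and the debugLogs block above simply records that no profile or kubectl context was ever produced. The rejected invocation, with the error from stderr:

    minikube start -p false-397221 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio
    # X Exiting due to MK_USAGE: The "crio" container runtime requires CNI   (exit status 14)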

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (152.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-245122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-245122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m32.328701165s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (119.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m59.799152704s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (119.80s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.06s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-998671 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0213 22:59:04.462586   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
E0213 22:59:11.137572   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 22:59:21.413464   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-998671 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.030469865s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.06s)

                                                
                                    
TestPause/serial/Pause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-998671 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-998671 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-998671 --output=json --layout=cluster: exit status 2 (268.348189ms)

                                                
                                                
-- stdout --
	{"Name":"pause-998671","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-998671","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
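The status JSON in the stdout block above can be consumed programmatically. A minimal Go sketch, assuming minikube is on PATH and reusing the pause-998671 profile name from this log; it decodes only the fields visible above, and note that a paused or stopped profile makes the command exit non-zero (status 2 here) even though the JSON on stdout is complete:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// component mirrors the per-component entries in the status JSON above.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

// clusterStatus mirrors only the fields used below.
type clusterStatus struct {
	Name       string
	StatusName string
	Nodes      []struct {
		Name       string
		StatusName string
		Components map[string]component
	}
}

func main() {
	out, err := exec.Command("minikube", "status", "-p", "pause-998671",
		"--output=json", "--layout=cluster").Output()
	// A paused or stopped profile exits non-zero but still prints full JSON,
	// so only give up when there is no output at all.
	if err != nil && len(out) == 0 {
		panic(err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %s: %s\n", st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}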

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-998671 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-998671 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (1.07s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-998671 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-998671 --alsologtostderr -v=5: (1.065333993s)
--- PASS: TestPause/serial/DeletePaused (1.07s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (104.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-340656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-340656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m44.902131461s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (104.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-245122 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c64fb331-f46d-44fb-a6fe-cc7e421d13ee] Pending
helpers_test.go:344: "busybox" [c64fb331-f46d-44fb-a6fe-cc7e421d13ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c64fb331-f46d-44fb-a6fe-cc7e421d13ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005464006s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-245122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.58s)
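The final step above runs ulimit -n inside the busybox pod to read the container's open-file-descriptor limit. Purely for comparison (not part of the test), the same limit can be read on the host with a short, Linux-specific Go sketch:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// RLIMIT_NOFILE is the limit that `ulimit -n` reports.
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)
}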

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-778731 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4cb33ea1-7203-4b60-8d50-75d180ec97f8] Pending
helpers_test.go:344: "busybox" [4cb33ea1-7203-4b60-8d50-75d180ec97f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4cb33ea1-7203-4b60-8d50-75d180ec97f8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005157882s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-778731 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-245122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-245122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-083863 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-083863 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m40.353889692s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-778731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-778731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010122573s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-778731 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-340656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2252d11f-5554-4621-8896-57a676bcfbab] Pending
helpers_test.go:344: "busybox" [2252d11f-5554-4621-8896-57a676bcfbab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2252d11f-5554-4621-8896-57a676bcfbab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005268086s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-340656 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-340656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-340656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.121045839s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-340656 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [55f8e193-d3c8-4aa1-9e96-f1b8a8375325] Pending
helpers_test.go:344: "busybox" [55f8e193-d3c8-4aa1-9e96-f1b8a8375325] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [55f8e193-d3c8-4aa1-9e96-f1b8a8375325] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004535622s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)
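Each DeployApp check above follows the same pattern: apply testdata/busybox.yaml, then poll for pods matching integration-test=busybox until they are Running and Ready, for up to 8m0s. A rough Go sketch of such a polling loop, assuming the client-go module is available and using a placeholder kubeconfig path (the test itself uses its own helpers rather than this code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod is Running with a Ready condition of True.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	// Placeholder kubeconfig path; adjust to the context under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.Background(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil && allReady(pods.Items) {
			fmt.Println("pods are ready")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}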

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-083863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-083863 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127116072s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-083863 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (434.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-245122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-245122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (7m14.180105791s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-245122 -n old-k8s-version-245122
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (434.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (903.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-778731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-778731 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (15m2.771895572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-778731 -n no-preload-778731
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (903.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (832.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-340656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0213 23:04:21.413693   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-340656 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (13m51.775545399s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-340656 -n embed-certs-340656
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (832.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (827.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-083863 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0213 23:06:46.760132   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 23:07:03.711645   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
E0213 23:09:11.137185   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:09:21.413384   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-083863 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (13m47.392792465s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-083863 -n default-k8s-diff-port-083863
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (827.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (63.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-120411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-120411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m3.04725067s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (63.05s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (107.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m47.929238746s)
--- PASS: TestNetworkPlugins/group/auto/Start (107.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-120411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-120411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.369403124s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-120411 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-120411 --alsologtostderr -v=3: (3.117261754s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-120411 -n newest-cni-120411
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-120411 -n newest-cni-120411: exit status 7 (92.052941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-120411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
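The status check above exits with code 7 when the host is stopped, which the test explicitly tolerates ("may be ok"). A small Go sketch of distinguishing that case from a real failure, assuming minikube is on PATH and reusing the newest-cni-120411 profile name from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "newest-cni-120411")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out)) // e.g. "Running" or "Stopped"

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host is", host)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 is what the test above treats as "stopped but may be ok".
		fmt.Println("host is", host, "(stopped, may be ok)")
	default:
		panic(err)
	}
}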

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (55.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-120411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0213 23:29:11.137162   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/functional-407129/client.crt: no such file or directory
E0213 23:29:21.414000   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/addons-548360/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-120411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (54.679498568s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-120411 -n newest-cni-120411
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (55.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (72.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.880347232s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-120411 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-120411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-120411 -n newest-cni-120411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-120411 -n newest-cni-120411: exit status 2 (292.289128ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-120411 -n newest-cni-120411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-120411 -n newest-cni-120411: exit status 2 (309.441964ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-120411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-120411 -n newest-cni-120411
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-120411 -n newest-cni-120411
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (112.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m52.493618093s)
--- PASS: TestNetworkPlugins/group/calico/Start (112.49s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nwbbj" [3d7b9ac5-3ccd-4188-82ea-e58dc9d91dab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 23:30:30.213237   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.218498   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.228874   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.249240   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.289587   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.369961   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.530535   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:30.851298   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:31.491578   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:32.772529   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:35.332844   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nwbbj" [3d7b9ac5-3ccd-4188-82ea-e58dc9d91dab] Running
E0213 23:30:36.089536   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.094796   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.105061   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.125370   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.166528   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.246880   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.407334   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:36.727694   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:37.368410   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:38.649311   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.005196539s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)
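The DNS check resolves kubernetes.default from inside the netcat pod, because that name is only served by the cluster's DNS; that is why the test runs nslookup via kubectl exec rather than on the host. A minimal Go equivalent of the lookup (it will normally fail when run outside the cluster):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolves only where the cluster DNS service is the configured resolver.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed (expected outside the cluster):", err)
		return
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}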

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
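The Localhost and HairPin checks above both boil down to nc -w 5 -z against port 8080: Localhost dials the pod's own loopback, while HairPin dials the netcat service name so traffic is routed back to the same pod. A minimal Go sketch of the same reachability probe, meaningful only when run from inside that pod:

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable mirrors what `nc -w 5 -z <host> 8080` checks: can a TCP
// connection to the port be opened within the timeout?
func reachable(host string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// "localhost" exercises the local socket; "netcat" (the service name from
	// the log) exercises hairpin NAT back to the originating pod.
	for _, host := range []string{"localhost", "netcat"} {
		fmt.Printf("%s:8080 reachable: %v\n", host, reachable(host))
	}
}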

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (95.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0213 23:30:46.330474   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
E0213 23:30:50.694794   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:30:56.570652   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m35.224028745s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (124.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m4.242267089s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (124.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tc2dm" [8a8ba646-c3a4-4512-baa4-8b5c26657c39] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00528376s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g49jl" [a4921eee-2f91-4f4b-b42f-2e862e3ff92f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 23:31:11.175045   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-g49jl" [a4921eee-2f91-4f4b-b42f-2e862e3ff92f] Running
E0213 23:31:17.051352   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.014896636s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (99.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0213 23:31:52.135913   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
E0213 23:31:58.011614   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/no-preload-778731/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m39.376123267s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.38s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8brf4" [e4c49356-7d1b-4b1c-8932-94e3b883e4ca] Running
E0213 23:32:03.709805   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/ingress-addon-legacy-741217/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00619827s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nvlg5" [5af2eff3-0469-424a-92a8-48757241e211] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nvlg5" [5af2eff3-0469-424a-92a8-48757241e211] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005266526s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9gxkm" [5f47860b-c5e8-436a-a2db-a860cc5f7ed0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0213 23:32:20.942737   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:20.948065   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:20.958384   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:20.978773   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:21.019190   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:21.099968   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
E0213 23:32:21.260582   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-9gxkm" [5f47860b-c5e8-436a-a2db-a860cc5f7ed0] Running
E0213 23:32:31.183664   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.004539677s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.60s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-397221 exec deployment/netcat -- nslookup kubernetes.default
E0213 23:32:21.581718   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (108.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0213 23:32:41.424785   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/default-k8s-diff-port-083863/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-397221 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m48.530029739s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.53s)
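The bridge start above is the standard minikube invocation with the bridge CNI selected; once it completes, the profile's context is written to the kubeconfig and can be queried directly. The binary path and profile name below are the ones used throughout this run:

  # Start a crio cluster with the bridge CNI (same flags as the test)
  out/minikube-linux-amd64 start -p bridge-397221 --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio
  # The per-profile kubeconfig context is then available to kubectl
  kubectl --context bridge-397221 get nodes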

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cccwj" [18e62b65-f089-43a4-a399-beefc385d658] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cccwj" [18e62b65-f089-43a4-a399-beefc385d658] Running
E0213 23:33:14.056187   16200 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/old-k8s-version-245122/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005607813s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)
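The NetCatPod step replaces the netcat test deployment and then waits for its pod to report Ready. A manual equivalent, assuming the testdata/netcat-deployment.yaml manifest from the minikube test tree and the enable-default-cni-397221 context from this run:

  # Recreate the netcat deployment used by the connectivity checks
  kubectl --context enable-default-cni-397221 replace --force -f testdata/netcat-deployment.yaml
  # Wait for the pod behind the app=netcat selector to become Ready
  kubectl --context enable-default-cni-397221 wait --for=condition=ready pod -l app=netcat --timeout=15m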

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vwp7w" [40b2e71d-9560-4f90-89b3-7505678d1df2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005466502s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
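The ControllerPod check only waits for the flannel DaemonSet pod (selector app=flannel in the kube-flannel namespace) to be Running. The same state can be inspected by hand, assuming the flannel-397221 context from this run:

  # List the flannel controller pods the test waits on
  kubectl --context flannel-397221 -n kube-flannel get pods -l app=flannel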

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q5r9v" [cfeb8bc8-f796-43bc-8329-dead06957bb9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q5r9v" [cfeb8bc8-f796-43bc-8329-dead06957bb9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004517674s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-397221 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)
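The KubeletFlags checks simply list the running kubelet process over SSH; running the same command by hand shows which flags the profile's kubelet was started with (profile name from this run):

  # Print the kubelet command line inside the bridge-397221 VM
  out/minikube-linux-amd64 ssh -p bridge-397221 "pgrep -a kubelet"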

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-397221 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jdkwl" [a8e02e05-38ca-4b72-898a-f0c6567c6db3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jdkwl" [a8e02e05-38ca-4b72-898a-f0c6567c6db3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006681005s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-397221 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-397221 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (39/310)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.18
267 TestNetworkPlugins/group/kubenet 3.82
275 TestNetworkPlugins/group/cilium 4.24
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-755510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-755510
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-397221 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.191:8443
  name: cert-expiration-675174
contexts:
- context:
    cluster: cert-expiration-675174
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-675174
  name: cert-expiration-675174
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-675174
  user:
    client-certificate: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.crt
    client-key: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-397221

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-397221"

                                                
                                                
----------------------- debugLogs end: kubenet-397221 [took: 3.601142499s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-397221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-397221
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)
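The long run of "context was not found" and "Profile ... not found" messages in the debugLogs above is expected: the test skips before ever creating the kubenet-397221 cluster (crio requires a CNI plugin, so kubenet is not exercised), so no such profile or kubeconfig context exists when the debug collector runs. This can be confirmed from the same workspace:

  # kubenet-397221 will not appear among the existing profiles
  out/minikube-linux-amd64 profile list
  # and the kubeconfig only carries the cert-expiration-675174 context shown in the dump
  kubectl config get-contexts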

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-397221 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-397221" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18171-8990/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.191:8443
  name: cert-expiration-675174
contexts:
- context:
    cluster: cert-expiration-675174
    extensions:
    - extension:
        last-update: Tue, 13 Feb 2024 22:57:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-675174
  name: cert-expiration-675174
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-675174
  user:
    client-certificate: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.crt
    client-key: /home/jenkins/minikube-integration/18171-8990/.minikube/profiles/cert-expiration-675174/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-397221

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-397221" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-397221"

                                                
                                                
----------------------- debugLogs end: cilium-397221 [took: 4.044676398s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-397221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-397221
--- SKIP: TestNetworkPlugins/group/cilium (4.24s)

                                                
                                    